LinkedIn Architecture
At JavaOne 2008, LinkedIn employees presented two sessions about the LinkedIn architecture. The slides are available online:
- LinkedIn - A Professional Social Network Built with Java™ Technologies and Agile Practices
- LinkedIn Communication Architecture
These slides are hosted at SlideShare. If you register then you can download them as PDF’s.
This post summarizes the key parts of the LinkedIn architecture. It’s based on the presentations above, and on additional comments made during the presentation at JavaOne.
Site Statistics
- 22 million members
- 4+ million unique visitors/month
- 40 million page views/day
- 2 million searches/day
- 250K invitations sent/day
- 1 million answers posted
- 2 million email messages/day
Software
- Solaris (running on Sun x86 platform and Sparc)
- Tomcat and Jetty as application servers
- Oracle and MySQL as DBs
- No ORM (such as Hibernate); they use straight JDBC
- ActiveMQ for JMS. (It’s partitioned by type of messages. Backed by MySQL.)
- Lucene as a foundation for search
- Spring as glue
Server Architecture
2003-2005
- One monolithic web application
- One database: the Core Database
- The network graph is cached in memory in The Cloud
- Members Search implemented using Lucene. It runs on the same server as The Cloud, because member searches must be filtered according to the searching user’s network, so it’s convenient to have Lucene on the same machine as The Cloud.
- WebApp updates the Core Database directly. The Core Database updates The Cloud.
2006
- Added Replica DB’s , to reduce the load on the Core Database. They contain read-only data. A RepDB server manages updates of the Replica DB’s.
- Moved Search out of The Cloud and into its own server.
- Changed the way updates are handled by adding the Databus. This is a central component that distributes updates to any component that needs them. This is the new updates flow:
- Changes originate in the WebApp
- The WebApp updates the Core Database
- The Core Database sends updates to the Databus
- The Databus sends the updates to: the Replica DB’s, The Cloud, and Search
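The 2006 update flow can be pictured as a publish/subscribe bus. The following is a minimal in-memory sketch, not LinkedIn's actual Databus: the subscriber names, event strings, and the `Databus` class itself are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of the Databus fan-out: the Core Database publishes each
// change event to a central bus, which forwards it to every registered
// consumer (Replica DB's, The Cloud, Search). All names are illustrative.
class Databus {
    private final List<Consumer<String>> subscribers = new ArrayList<>();
    private final List<String> delivered = new ArrayList<>();

    void subscribe(String name, Consumer<String> handler) {
        subscribers.add(event -> {
            handler.accept(event);
            delivered.add(name + ":" + event);
        });
    }

    // Called by the Core Database after it commits a change.
    void publish(String event) {
        for (Consumer<String> s : subscribers) s.accept(event);
    }

    List<String> deliveryLog() { return delivered; }
}
```

The point of the design is that the Core Database has exactly one downstream consumer (the bus); adding a new derived store means adding a subscriber, not touching the database path.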
2008
- The WebApp doesn’t do everything itself anymore: they split parts of its business logic into Services. The WebApp still presents the GUI to the user, but now it calls Services to manipulate the Profile, Groups, etc.
- Each Service has its own domain-specific database (i.e., vertical partitioning).
- This architecture allows other applications (besides the main WebApp) to access LinkedIn. They’ve added applications for Recruiters, Ads, etc.
The Cloud
- The Cloud is a server that caches the entire LinkedIn network graph in memory.
- Network size: 22M nodes, 120M edges.
- Requires 12 GB of RAM.
- There are 40 instances in production.
- Rebuilding an instance of The Cloud from disk takes 8 hours.
- The Cloud is updated in real-time using the Databus.
- Persisted to disk on shutdown.
- The cache is implemented in C++, accessed via JNI. They chose C++ instead of Java for two reasons:
- To use as little RAM as possible.
- Garbage Collection pauses were killing them. [LinkedIn said they were using advanced GC's, but GC's have improved since 2003; is this still a problem today?]
- Having to keep everything in RAM is a limitation, but as LinkedIn have pointed out, partitioning graphs is hard.
- [Sun offers servers with up to 2 TB of RAM (the Sun SPARC Enterprise M9000 Server), so LinkedIn could support up to 1.1 billion users before they run out of memory. (This calculation is based only on the number of nodes, not edges.) Price is another matter: Sun say only "contact us for price", which is ominous considering that the prices they do list go up to $30,000.]
The Cloud caches the entire LinkedIn Network, but each user needs to see the network from his own point of view. It’s computationally expensive to calculate that, so they do it just once when a user session begins, and keep it cached. That takes up to 2 MB of RAM per user. This cached network is not updated during the session. (It is updated if the user himself adds/removes a link, but not if any of the user’s contacts make changes. LinkedIn says users won’t notice this.)
As an aside, they use Ehcache to cache members’ profiles. They cache up to 2 million profiles (out of 22 million members). They tried caching using LFU algorithm (Least Frequently Used), but found that Ehcache would sometimes block for 30 seconds while recalculating LFU, so they switched to LRU (Least Recently Used).
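The LRU eviction policy they switched to can be sketched in a few lines on Java's `LinkedHashMap` in access-order mode. This illustrates the LRU policy only, not Ehcache itself; the class name and the tiny capacity are illustrative (LinkedIn capped the real cache at 2 million profiles).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch (not Ehcache): LinkedHashMap in access-order
// mode evicts the least recently used entry once capacity is exceeded,
// in O(1) per operation, with no bulk recalculation step like LFU's.
class ProfileCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    ProfileCache(int capacity) {
        super(16, 0.75f, true); // true = access order, the LRU property
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

The contrast with their LFU experience is that LRU bookkeeping is a constant-time list move on each access, so there is no periodic recomputation that can block for seconds.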
Communication Architecture
Communication Service
The Communication Service is responsible for permanent messages, e.g. InBox messages and emails.
- The entire system is asynchronous and uses JMS heavily
- Clients post messages via JMS
- Messages are then routed via a routing service to the appropriate mailbox or directly for email processing
- Message delivery: either Pull (clients request their messages), or Push (e.g., sending emails)
- They use Spring, with proprietary LinkedIn Spring extensions, and HTTP-RPC.
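The asynchronous post-then-route flow above can be sketched with a plain in-memory queue standing in for the JMS broker (LinkedIn used ActiveMQ). The message format and the routing rule here are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the asynchronous flow: clients post to a queue and return
// immediately; a routing worker drains the queue and dispatches each
// message to a mailbox or to email processing. A BlockingQueue stands
// in for the JMS broker; prefixes and names are illustrative.
class MessageRouter {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> mailbox = new ArrayList<>();
    final List<String> outboundEmail = new ArrayList<>();

    // Non-blocking from the client's perspective: just enqueue.
    void post(String message) { queue.add(message); }

    // In production this would run continuously on its own consumer thread.
    void drainOnce() {
        String msg;
        while ((msg = queue.poll()) != null) {
            if (msg.startsWith("email:")) outboundEmail.add(msg);
            else mailbox.add(msg);
        }
    }
}
```

Decoupling the client from delivery this way is what makes both the Pull path (read your mailbox later) and the Push path (send the email now) possible behind the same posting API.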
Scaling Techniques
- Functional partitioning: sent, received, archived, etc. [a.k.a. vertical partitioning]
- Class partitioning: Member mailboxes, guest mailboxes, corporate mailboxes
- Range partitioning: Member ID range; Email lexicographical range. [a.k.a. horizontal partitioning]
- Everything is asynchronous
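The range-partitioning schemes above reduce to a routing function from a key to a shard. A minimal sketch, where the shard width and the alphabet split are illustrative values, not LinkedIn's actual configuration:

```java
// Sketch of range partitioning: member mailboxes by member-ID range,
// guest mailboxes by the first letter of the email address. The range
// width and the 26-bucket split are assumed values for illustration.
class MailboxPartitioner {
    static final long RANGE_WIDTH = 1_000_000; // members per shard (assumed)

    static int shardForMember(long memberId) {
        return (int) (memberId / RANGE_WIDTH);
    }

    // Lexicographic range: bucket by first character, 'a'..'z' -> 0..25.
    static int shardForEmail(String email) {
        char c = Character.toLowerCase(email.charAt(0));
        return (c - 'a') % 26;
    }
}
```

Because routing is a pure function of the key, any service instance can compute the target shard without a lookup table, which keeps the partitioning layer stateless.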
Network Updates Service
The Network Updates Service is responsible for short-lived notifications , e.g. status updates from your contacts.
Initial Architecture (up to 2007)
- There are many services that can contain updates.
- Clients make separate requests to each service that can have updates: Questions, Profile Updates, etc.
- It took a long time to gather all the data.
In 2008 they created the Network Updates Service. The implementation went through several iterations:
Iteration 1
- Client makes just one request, to the NetworkUpdateService.
- NetworkUpdateService makes multiple requests to gather the data from all the services. These requests are made in parallel.
- The results are aggregated and returned to the client together.
- Pull-based architecture.
- They rolled out this new system to everyone at LinkedIn at once, which caused problems while the system was stabilizing. In hindsight, they should have tried it out on a small subset of users first.
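Iteration 1's parallel fan-out can be sketched with `CompletableFuture`: one client request triggers parallel calls to every update source, and the results are merged before returning. The source names and the `"updates-from-"` payloads are illustrative, not LinkedIn's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of the Iteration 1 aggregator: a single request fans out to
// each update source in parallel, then merges the results. fetchFrom()
// stands in for a remote call to one service (Questions, Profile, ...).
class NetworkUpdateService {
    static CompletableFuture<String> fetchFrom(String source) {
        return CompletableFuture.supplyAsync(() -> "updates-from-" + source);
    }

    static List<String> gatherUpdates(List<String> sources) {
        List<CompletableFuture<String>> futures = new ArrayList<>();
        for (String s : sources) futures.add(fetchFrom(s)); // all start now
        List<String> merged = new ArrayList<>();
        // join() waits for each call; order matches the request list.
        for (CompletableFuture<String> f : futures) merged.add(f.join());
        return merged;
    }
}
```

The key property is that total latency is roughly the slowest single source rather than the sum of all sources, which is what made one aggregated request viable.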
Iteration 2
- Push-based architecture: whenever events occur in the system, add them to the user’s "mailbox". When a client asks for updates, return the data that’s already waiting in the mailbox.
- Pros: reads are much quicker since the data is already available.
- Cons: might waste effort on moving around update data that will never be read. Requires more storage space.
- There is still post-processing of updates before returning them to the user. E.g.: collapse 10 updates from a user to 1.
- The updates are stored in CLOB’s: 1 CLOB per update-type per user (for a total of 15 CLOB’s per user).
- Incoming updates must be added to the CLOB. Use optimistic locking to avoid lock contention.
- They had set the CLOB size to 8 KB, which was too large and led to a lot of wasted space.
- Design note: instead of CLOB’s, LinkedIn could have created additional tables, one for each type of update. They said that they didn’t do this because of what they would have to do when updates expire: Had they created additional tables then they would have had to delete rows, and that’s very expensive.
- They used JMX to monitor and change the configuration in real-time. This was very helpful.
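The optimistic-locking append can be sketched as read-version, modify, then write-back-only-if-unchanged; with JDBC this would be an `UPDATE ... SET data = ?, version = version + 1 WHERE version = ?` whose row count tells you whether you won. The in-memory class below and its names are illustrative.

```java
// Sketch of optimistic locking on a per-user update CLOB: no lock is
// held while building the new content; the write succeeds only if the
// version is still the one that was read, otherwise the caller retries.
class UpdateClob {
    private String data = "";
    private long version = 0;

    synchronized long version() { return version; }
    synchronized String read() { return data; }

    // Returns false if another writer committed first; caller retries.
    synchronized boolean tryAppend(long expectedVersion, String update) {
        if (version != expectedVersion) return false;
        data = data + update;
        version++;
        return true;
    }
}
```

Compared with pessimistic row locks, contention only costs a retry on actual conflict, which suits a workload where concurrent writes to the same user's CLOB are rare.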
Iteration 3
- Goal: improve speed by reducing the number of CLOB updates, because CLOB updates are expensive.
- Added an overflow buffer: a VARCHAR(4000) column where data is added initially. When this column is full, dump it to the CLOB. This eliminated 90% of CLOB updates.
- Reduced the size of the updates.
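The overflow buffer in Iteration 3 can be sketched as follows: updates land in a small VARCHAR-style buffer first, and only when the buffer would overflow is its whole content flushed into the expensive CLOB in one write. The 4000-character limit mirrors the VARCHAR(4000) column in the post; everything else is an illustrative assumption.

```java
// Sketch of the Iteration 3 overflow buffer: many cheap buffer appends
// are batched into one expensive CLOB write. The counter exists only to
// show how many CLOB writes the batching actually costs.
class BufferedUpdateStore {
    static final int BUFFER_LIMIT = 4000; // mirrors VARCHAR(4000)
    private final StringBuilder buffer = new StringBuilder();
    private final StringBuilder clob = new StringBuilder();
    int clobWrites = 0; // the expensive operation being minimized

    void append(String update) {
        if (buffer.length() + update.length() > BUFFER_LIMIT) {
            clob.append(buffer); // one CLOB write flushes many updates
            clobWrites++;
            buffer.setLength(0);
        }
        buffer.append(update);
    }

    String readAll() { return clob.toString() + buffer; }
}
```

If a typical update is far smaller than the buffer, most appends touch only the buffer, which is consistent with the reported 90% drop in CLOB updates.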
[LinkedIn have had success in moving from a Pull architecture to a Push architecture. However, don't discount Pull architectures: Amazon, for example, use one. In A Conversation with Werner Vogels, Amazon's CTO said that when you visit the front page of Amazon, they typically call more than 100 services in order to construct the page.]
The presentation ends with some tips about scaling. These are oldies but goodies:
- Can’t use just one database. Use many databases, partitioned horizontally and vertically.
- Because of partitioning, forget about referential integrity or cross-domain JOINs.
- Forget about 100% data integrity.
- At large scale, cost is a problem: hardware, databases, licenses, storage, power.
- Once you’re large, spammers and data-scrapers come a-knocking.
- Cache!
- Use asynchronous flows.
- Reporting and analytics are challenging; consider them up-front when designing the system.
- Expect the system to fail.
- Don’t underestimate your growth trajectory.