There are a lot of differences between Linux version 2.4 and 2.6, so first we'll cover the tuning issues that are the same in both. To change TCP settings, add the entries below to the file /etc/sysctl.conf and then run "sysctl -p".

Like all operating systems, the default maximum Linux TCP buffer sizes are way too small. I suggest changing them to the following settings:

# increase TCP max buffer size setable using setsockopt()
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

You should also verify that the following are all set to the default value of 1:

sysctl net.ipv4.tcp_window_scaling
sysctl net.ipv4.tcp_timestamps
sysctl net.ipv4.tcp_sack

Note: you should leave tcp_mem alone; the defaults are fine.

Another thing you can try that may help increase TCP throughput is to increase the size of the interface queue:

ifconfig eth0 txqueuelen 1000

I've seen increases in bandwidth of up to 8x by doing this on some long, fast paths. This is only a good idea for Gigabit Ethernet connected hosts, and it may have side effects such as uneven sharing between multiple streams. Also, I've been told that for some network paths, using the Linux 'tc' (traffic control) system to pace traffic out of the host can help improve total throughput.

Linux 2.4

Starting with Linux 2.4, Linux has implemented a sender-side autotuning mechanism, so setting the optimal buffer size on the sender is not needed. This assumes you have set large buffers on the receive side, as the sending buffer will not grow beyond the size of the receive buffer.

However, Linux 2.4 has some other strange behavior that one needs to be aware of. For example, the value of ssthresh for a given path is cached in the routing table. This means that if a connection has a retransmission and reduces its window, then all connections to that host for the next 10 minutes will use a reduced window size and not even try to increase their window. The only way to disable this behavior is to do the following before each new connection (you must be root):

sysctl -w net.ipv4.route.flush=1

More information on various tuning parameters for Linux 2.4 is available in the Ipsysctl tutorial.

Linux 2.6

Starting in Linux 2.6.7 (and back-ported to 2.4.27), Linux includes alternative congestion control algorithms beside the traditional 'reno' algorithm, designed to recover quickly from packet loss on high-speed WANs. Linux 2.6 also includes both sender- and receiver-side automatic buffer tuning (up to the maximum sizes specified above), as well as a setting to fix the ssthresh caching weirdness described above. There are a couple of additional sysctl settings for 2.6:

# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
# for 10 GigE, use this
# net.core.netdev_max_backlog = 30000

Starting with version 2.6.13, Linux supports pluggable congestion control algorithms. The congestion control algorithm used is set with the sysctl variable net.ipv4.tcp_congestion_control, which defaults to cubic or reno depending on which version of the 2.6 kernel you are using. To get a list of the congestion control algorithms available in your kernel, run:

sysctl net.ipv4.tcp_available_congestion_control

The set of congestion control options is selected when you build the kernel; several are available in the 2.6.23 kernel. For very long, fast paths, I suggest trying cubic or htcp if reno is not performing as desired. To set the algorithm, do the following:

sysctl -w net.ipv4.tcp_congestion_control=htcp

More information on each of these algorithms and some results can be found here. More information on tuning parameters and defaults for Linux 2.6 is available in the file ip-sysctl.txt, which is part of the 2.6 source distribution.

Warning on large MTUs: if you have configured your Linux host to use 9K MTUs but the connection is using 1500-byte packets, then you actually need 9/1.5 = 6 times more buffer space in order to fill the pipe. In fact, some device drivers only allocate memory in power-of-two sizes, so you may even need 16/1.5 = 11 times more buffer space!

And finally, a warning for both 2.4 and 2.6: for very large BDP paths where the TCP window is > 20 MB, you are likely to hit the Linux SACK implementation problem. If Linux has too many packets in flight when it gets a SACK event, it takes too long to locate the SACKed packet, you get a TCP timeout, and CWND goes back to 1 packet. Restricting the TCP buffer size to about 12 MB seems to avoid this problem, but it clearly limits your total throughput. Another solution is to disable SACK.

Linux 2.2

If you are still running Linux 2.2, upgrade! If this is not possible, add the following to /etc/rc.d/rc.local:
echo 8388608 > /proc/sys/net/core/wmem_max
echo 8388608 > /proc/sys/net/core/rmem_max
echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/wmem_default
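
The SACK warning above mentions disabling SACK as a workaround for very large windows. Assuming the standard net.ipv4.tcp_sack knob (and root privileges), a sketch of that workaround would look like:

```shell
# Sketch: disable SACK to work around the large-window SACK problem
# described above. This trades away SACK's loss-recovery efficiency,
# so only do it if you are actually hitting timeouts on >20 MB windows.
sysctl -w net.ipv4.tcp_sack=0

# To make it persistent, add to /etc/sysctl.conf instead:
#   net.ipv4.tcp_sack = 0
```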
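
The maximum buffer sizes above are ultimately driven by the bandwidth-delay product (BDP) of the path. As a rough sketch of that sizing arithmetic (the 1 Gbit/s bottleneck and 70 ms round-trip time are illustrative assumptions, not values from the text):

```shell
#!/bin/sh
# Rough BDP estimate for choosing the tcp_rmem/tcp_wmem maximums.
# Assumed example path: 1 Gbit/s bottleneck, 70 ms round-trip time.
BANDWIDTH_BITS_PER_SEC=1000000000
RTT_MS=70

# BDP (bytes) = bandwidth (bits/s) / 8 * RTT (s)
BDP_BYTES=$(( BANDWIDTH_BITS_PER_SEC / 8 * RTT_MS / 1000 ))
echo "BDP is ${BDP_BYTES} bytes"   # 8750000 bytes, i.e. ~8.75 MB
```

On such a path the 16 MB maximums in the settings above leave comfortable headroom; a longer RTT or faster link scales the requirement linearly.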