FreeBSD + crazy forking one-liner == high load and good performance

[H]EMI_426

I spiked the load on my FreeBSD workstation up to 380 last night and kept it there for more than ten minutes. I mined my Apache logs for all unique IPs, then ran a one-liner that looped through the file and ran a dig in the background for each one, appending its results to a file. At one point bash was managing 28,000 jobs. The cool part was that the machine was still fairly responsive to input, didn't die/choke/hang, didn't run out of anything, etc. I was fairly impressed.
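Something along these lines, give or take the exact log path and filenames:

Code:
# pull the unique client IPs out of the access log, then background
# a reverse lookup for each one, appending the answers to a file
awk '{print $1}' /var/log/httpd-access.log | sort -u > iplist
for i in `cat iplist`
do
	dig -x $i >> results &
done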

I know there are a lot more efficient ways to do a thing like that, but it made for an interesting torture test, and I knew going in that it would.

My wmmon CPU monitor was all horizontal lines and no graph. Kind of amusing.

There's no real point to this post but to restate my respect for FreeBSD.
 
Try this one-liner:

Code:
:(){ :|:& };:

This will really test its resilience. NOTE: make sure nothing critical is running at the same time, because this little forkbomb (if your limits aren't set) really will munch everything, requiring a hard reset.
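For anyone wondering what that line actually does: ':' is just being used as a function name. With the function renamed to something readable, it's equivalent to this:

Code:
# same forkbomb, with ':' renamed to 'bomb': a function that pipes
# itself into itself and backgrounds the result, recursing forever
bomb() {
	bomb | bomb &
}
bomb

Each call spawns two more copies, so the process count grows exponentially until the per-user limit (or the machine) gives out.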
 
Here's my (hopefully somewhat similar) test, done on Linux:
Code:
#!/usr/bin/perl
# print 10,000 random dotted-quad IPs, one per line (each octet 1-254)
for (1..10_000)
{
	print int(rand(254)+1).".".
	      int(rand(254)+1).".".
	      int(rand(254)+1).".".
	      int(rand(254)+1)."\n";
}
I used this to make a file with 10k randomly generated IP addresses in it. Then I did this:
Code:
for i in `cat list`
do
	dig +short -x $i >> hostlist &
done

Kate over ssh was still plenty responsive. It took a little while to start another session via PuTTY, but it worked fine, eventually. Maybe my DNS server is faster - I only got up to around 2k processes, at ~100 kbit/sec of network traffic. Even when I regenerated a larger list (100k items), bash was the limiting factor in spawning new processes, using about 25% CPU. Other machines on the network got awfully slow, though - the number of simultaneous connections made my DI-624S unhappy :( It took about 19 minutes to do the 100k digs; how long did your machine take, and for how many lookups?

Last note: I did this on a machine with 1GB of RAM and no swap. If it had run away, it would've killed the machine dead.
 
The fork bomb is a good way to lock up your computer. I don't recommend it.
 
unhappy_mage: it took about ten minutes to run through 33,728 IPs. Ironically, 29,887 of them resolved... more than I expected.

Maybe mine forked off and ran out of control faster because my DNS server is local, but it forwards on to my ISP's. The load on my DNS machine bounced up quite a bit, too.
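(Assuming a BIND-style setup, the forwarding part is just this sort of thing; the addresses are placeholders:)

Code:
// named.conf snippet: answer from cache, forward misses to the ISP
options {
	forwarders { 192.0.2.1; 192.0.2.2; };
	forward only;
};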

The machine the script was run on is an XP1800+ with 768MB of RAM. It did start swapping, but only by fifty megs or so at the worst. It was reading/writing to my NFS home directory, plus doing all its other little workstation tasks. It probably started swapping because Firefox was running.

Your script to run through the list is identical to how I did it.

Interesting that you think bash was the limiting factor, while I think it was the speed of the DNS resolver. My firewall (also a FreeBSD box) handled several thousand simultaneous DNS requests without issue, and there was no noticeable impact on any machine outside the one being tested and the local DNS server.
 
[H]EMI_426;1030525960 said:
unhappy_mage: it took about ten minutes to run through 33,728 IPs.
Mwahaha, I win :D
[H]EMI_426;1030525960 said:
Maybe mine forked off and ran out of control faster cause my DNS server is local, but forwards on to my ISP's. The load on my DNS machine did bounce up quite a bit, too.
Yeah, I'd expect the DI-624S is the limiting factor here. I'm on FiOS (and raw transfer speed wasn't the limiting factor here), and they make you use their router, or I'd've set up a Shorewall box or something similar to do the PPPoE or whatever they're using, as well as firewall duty. This box is renowned for falling over under heavy BitTorrent usage, though, so it's not unreasonable to expect it to choke here.

I wonder what this would run like at school, with Cisco routers and so forth... but I don't want to get in trouble for forkbombing the shell servers :eek:
[H]EMI_426;1030525960 said:
The machine the script was ran on is an XP1800+ with 768M RAM. It did start swapping, but only fifty megs or so at the worst. It was reading/writing to my NFS home directory, plus doing all its other little workstation tasks. It probably started swapping cause Firefox was running.
Dual P3 866 with a gig of RAM. Headless, but it was running a few things via ssh tunnel, and they stayed alive... until you did anything that would spawn another process :p I didn't remember to check load averages, but they were probably pretty high.
[H]EMI_426;1030525960 said:
Interesting to see how you think bash was the limiting factor, while I think it was the speed of the DNS resolver that was the limiting factor. My firewall (also a FreeBSD box) handled several thousand simultaneous DNS requests without issue and there was no noticeable impact on any machines outside the one being tested and the local DNS server.
Well, bash was taking 30% CPU or so. That sounds like a possible limiting factor. Maybe I'll rewrite it in C and see how it does.
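(In the meantime, one way to take bash mostly out of the loop is to let xargs manage the workers; the parallelism figure below is arbitrary, and the filenames match the earlier test:)

Code:
# run up to 128 digs in parallel without a shell loop per lookup
xargs -I{} -P128 dig +short -x {} < list >> hostlist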
 
I just wanted to chime in and say that this thread is very interesting. I'd love to see some apples-to-apples comparisons of the same hardware and the same network doing the same test on Linux and BSD.

I should test this out on my 233MHz Pentium 1 w/ 128MB of RAM. :D
 
And Solaris, maybe. You'd need a local DNS server with all the answers cached, though, and your choice of DNS server could influence the results quite a bit.

I guess I could set up a couple DNS servers at school, and from a test machine point to all of them for maximum DNS performance... That'd be maximum effort for minimum results, though :(
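(Pointing the test box at all of them is just a resolv.conf tweak; the addresses below are made up, and 'options rotate' is needed or the extra servers only get used as fallbacks:)

Code:
# /etc/resolv.conf on the test machine
options rotate
nameserver 10.0.0.1
nameserver 10.0.0.2
nameserver 10.0.0.3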
 
And Solaris, maybe. You'd need a local DNS server with all the answers cached, though, and your choice of DNS server could influence the results quite a bit.

I guess I could set up a couple DNS servers at school, and from a test machine point to all of them for maximum DNS performance... That'd be maximum effort for minimum results, though :(

Yeah, that's the main issue, I'd think... getting this test to be uniform across runs would be next to impossible.

I suppose if it really is the DNS server that's the limiting factor (as hemi thinks), then it's all a moot point, as the test results wouldn't be indicative of the OS's performance anyway. :p

I'd like to do some runs of this, but I'll have to put in the effort to set up my dual 800MHz P3 / 768MB RAM box as a DNS server, or else I'd make my ISP very unhappy.
 
You could possibly get around the DNS-forwarding bottleneck by setting up a zone for a /16 of non-routable IPs and then coming up with hostnames for each IP... At least, that would make the DNS server itself the limiting factor instead of the upstream DNS server.
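Generating the records is easy enough to script. A rough sketch for 10.0.0.0/16, with made-up hostnames, producing the body of a BIND-style 0.10.in-addr.arpa zone file:

Code:
# emit a PTR record for every host in 10.0.0.0/16
for a in `seq 0 255`
do
	for b in `seq 1 254`
	do
		echo "$b.$a	IN	PTR	host-$a-$b.test.lan."
	done
done > db.10.0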
 
Try this one-liner:

Code:
:(){ :|:& };:

This will really test its resilience. NOTE: make sure nothing critical is running at the same time, because this little forkbomb (if your limits aren't set) really will munch everything, requiring a hard reset.

Just for kicks, I ran this on my laptop (1.13GHz P3, 384MB RAM, 256MB swap) and it certainly had its way with the system. It never did manage to fully crash, though it was thrashing the swap like nobody's business. The system very quickly became incredibly unresponsive (~30 seconds to register key presses on the console), but it stayed functional. I didn't feel like waiting 10 minutes to get top up, though, so I just hard reset it :p

No out-of-memory kills were recorded in the logs, so maybe I just needed to let it go longer.
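(For reference, on Linux the OOM killer logs through the kernel ring buffer, so something like this shows any kills after the fact; the log file path varies by distro:)

Code:
# check the ring buffer for OOM-killer activity
dmesg | grep -i 'out of memory'
# or the persisted kernel log (path varies by distro)
grep -i oom /var/log/kern.log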
 
Interesting. A fork bomb locked up my (relatively) modern Ubuntu machine a couple of months ago. What system are you running?

To prevent this, I think you can set a per-user cap on the number of processes, so the bomb runs out of process slots instead of taking the whole machine down.
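On Linux that's roughly the following; the numbers are arbitrary, and FreeBSD has an equivalent (maxproc in login.conf, or the kern.maxprocperuid sysctl):

Code:
# cap the current shell and its children at 200 processes
ulimit -u 200

# or permanently, per user, via /etc/security/limits.conf:
#   someuser  hard  nproc  200

With a cap like that in place, the forkbomb just fills its quota and stalls instead of taking the whole box down.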
 