Squid, FreeBSD Rock the House at Caching Bake-Off
Blue Lang writes: "Saw on the squid mailing list today that the results of the second polygraph Web-cache benchmarks are in, and squid on FreeBSD captured a few top marks, as well as performing exceptionally well overall. Interesting reading, especially as a comparison of free and open systems versus some very well-architected proprietary solutions."
Architecture (Score:1)
Re:One Word for You (Score:1)
Thank you.
Re:Performance is king. (Score:1)
The open source operating systems, on the other hand, really did shine, especially FreeBSD. Then again, this sort of network-heavy workload is a perfect fit for *BSD, so that aspect isn't surprising.
They should have tested conformance (Score:1)
Sure, Netscape and most HTTP clients can connect to them, but try it with something that really exercises HTTP/1.1 features and it just sucks. So far, I have yet to see a single 'transparent' proxy that even implements HTTP pipelining, let alone the more advanced stuff from the spec.
IMHO, this is the most critical point when choosing a vendor - how well they implement HTTP/1.1 - very little else matters!
Re:Accelerate your website -- it's awesome! (Score:1)
Sure. Bandwidth costs money, and in some (non-North American) countries it's staggering.
Maybe Squid COULD use a little tweaking (Score:1)
Where are squid's weaknesses, and what can be done to improve on them?
Squid uses the native filesystem of whatever OS it runs on. Could a better solution be to not use one at all? Let it write "raw" to partitions you assign it. We don't need permission checks, don't give a damn about concurrency (one I/O thread per partition), don't care what happens when we crash (other than to know whether a file is good or bad). How much could be stripped out and even optimized for this use?
There also seem to be some IP stack issues. I know you don't want to be doing some of this stuff in user mode, but could Squid benefit from jumping in lower down at the IP level and handling HTTP/1.1 and the relevant TCP bits itself?
I'm not a kernel hacker (obviously), and I don't even know if these are Squid's weak points, but it seems a good place to start tweaking.
Re:One Word for You (Score:2)
I hope this is a troll. Just in case it isn't, let's think good and hard about what khttpd does: it's an ultra-fast web server for static content served from the filesystem. It's not a caching product, and never will/can be. It's also marked EXPERIMENTAL and almost certainly will be for 2.4 as well. khttpd is a marginally cool idea (hmmm..whoa, look what I can do!!!) that very few people will be using any time soon. And it certainly won't be for caching.
FreeBSD & load (Score:2)
Maybe it's just configuration somewhere; both boxes are pretty much stock. However, it seems to me that I noticed something similar with MacBSD and Linux a couple of years ago, when the MacBSD box had a slight memory/CPU disadvantage. (However, if you tried to run LyX with the default PostScript fonts and not using xfs, both came to a screeching halt.)
hawk
Re:FreeBSD & load (Score:2)
I suspect that at least a large part is the switch by debian from slink to potato; there have been various changes in default behavior, and the whole thing seems to have gotten *much* slower since the "upgrade". If I can get three hours free at a time that students don't need the server running, I'm ripping out debian for freebsd. I've reported it a couple of times in a couple of places, but have gotten no acknowledgement that anyone else has seen potato become a pig. The problem is that I can't report anything objectively as a bug without spending a couple of days on an install/reinstall/test cycle on a couple of different versions . .
However, the speed difference was there before with older versions of debian and freebsd. The offending box only has 24MB . . .
Re:Rob - you learn anything from this article? (Score:4)
1.
2. Last I checked
3. Given (2) above and the highly dynamic nature of
4. As I said
As for
Re:Caching vs. Akamai-type services (Score:1)
Re:Rob - you learn anything from this article? (Score:2)
Re:My boss will love this article. (Score:1)
Just a quick question regarding the machines and the setting they're in:
A pair of identical (load balancing and transparent failover via BigIP) rackmount servers, each with a PIII 600 CPU, 256MB, 2940UW and 20Gb of disk. And let's not forget the triply-redundant T3's to three distinct Tier-1 internet providers. Nice setup. What do P3's get you that P2's or even Celerons don't? The extra cache won't help a whole hell of a lot and the SIMD or KNI instructions don't do anything for you, either...
...Now I heard that somewhere there is a patch to the Linux kernel that uses MMX to help calculate the packet checksums faster but you said you don't use Linux.
Oops (Score:2)
Anyone got any good stories about using Oops?
Re:It didn't win. (not flamebait!) (Score:3)
I don't think Squid won (Score:2)
These results are surprising to me - I would have thought that the use of commodity hardware and no-cost software would have created a compelling price advantage. What happened?
If there's something I'm missing, could someone please spell it out for me?
DOS port (Score:2)
nWo for life!
------------
a funny comment: 1 karma
an insightful comment: 1 karma
a good old-fashioned flame: priceless
Re:What in the fuck ... (Score:2)
Novell's BorderManager? Dunno.
Novell's Internet Caching System? See the Vendor Comments page [ircache.net] on the bakeoff site, where somebody from Dell says:
There were other boxes running it as well, e.g. at least some, perhaps all, of the IBM boxes.
(The "Vendor Comments" section seems to be filled primarily with "Vendor Advertisements"; yes, my employer, NetApp, proudly participated in the marketoonery in question.)
Re:What in the fuck ... (Score:2)
"The marketoonery in question" being the dumping of advertisements into the "comments" section, not Polygraph itself. Marketoons - can't live with them, can't send them out the airlock without a suit....
Re:Misleading title for article (Score:2)
Or, rather, for FreeBSD plus whatever caching code iMimic runs atop it.
Re:Squid and Akamai (Score:3)
The CTO of Akamai is Daniel Lewin; his bio page at Akamai [akamai.com] says nothing about Squid.
You may, perhaps, be thinking of Peter Danzig, who is the VP of Technology at Akamai; his bio page at Akamai [akamai.com] says:
I think the Squid project was originally derived from the Harvest cache; the NetApp NetCache software was also originally Harvest-derived, although much, perhaps most, of it was done at Internet Middleware (a company founded by Peter and bought by NetApp) and NetApp. (I suspect much of Squid might also be non-Harvest code.)
Re:Performance is king. (Score:1)
I agree comparing the two is a bad idea
performance is king
Remember NT? (Score:1)
Now, why oh why would that be a bad idea?
The folks at Microsoft thought it would be cool to move the GDI into the kernel for faster graphics under NT. Now when NT crashes, the blame falls on "unstable video drivers" instead of the system architecture.
No thanks. Our Pentium 166 can saturate the T1s already.
The wheel is turning but the hamster is dead.
Re:Accelerate your website -- it's awesome! (Score:1)
Re:suprised by this ... (Score:1)
Other software based on Squid (Score:2)
Re:My boss will love this article. (Score:1)
"... That probably would have sounded more commanding if I wasn't wearing my yummy sushi pajamas..."
-Buffy Summers
Goodbye Iowa
Performance is king. (Score:1)
Squid, on the other hand, is a good complement if your yearly IT budget is smaller than marketing's Christmas party funding... but for serious stuff?
Sorry, but Squid didn't cut it here - I know, we all want the open source crew to win, but hey.. it just didn't happen here.
Re:It didn't win. (not flamebait!) (Score:1)
Re:It didn't win. (not flamebait!) (Score:1)
Re:Performance is king. (Score:1)
I'll agree Squid can do some serious work, but it didn't fare well against the other solutions presented in this benchmark. :(
Re:DOS port (Score:1)
Re:It didn't win. (not flamebait!) (Score:1)
Now, I'll say it again a little more succinctly: Squid got squished.
Re:It didn't win. (not flamebait!) (Score:2)
Windows delivers the scalability and reliability to run real businesses-now.
Opinion.
Feature for feature, Windows 2000 is the most cost-effective business platform.
Opinion.
Microsoft wants to work with you to make your business successful on the Internet.
Fact.
Some of the biggest e-businesses and dot coms run on Windows.
Fact.
Dell, the largest e-business on the Internet, runs on Windows.
Fact.
Sun claims to be a leader in system reliability and more reliable than Windows.
Fair enough, they do claim to...
Electrolux Group, Accounting.com, Pro2Net and thousands of other companies have switched their Web sites from Sun platforms to Windows. (Source: Netcraft)
Fact.
The vast majority of Sun's Solaris shipments are on Sun's own expensive, proprietary hardware and Sun has always buried the cost of Solaris in their hardware pricing.
Opinion.
Conclusion: Windows is useful in some environments. So is everything else. I care about numbers, data, real, tangible, and reproducible things. If an NT server in X configuration crashes 35 times and has an average downtime of 5%, while a linux box in X configuration with similar performance has a downtime of 1%.. linux wins. Conversely, if the NT box can pump out 8000 hits/s, while the linux box can manage 2100 hits/s and I need raw performance, NT wins. Stop reading the marketing hype and start reading the technical specifications.
Re:My boss will love this article. (Score:2)
I can't tell whether you meant this as a little FUD thrown at linux, or because you believed all the other vendors there were inferior to FreeBSD. On one count you'd be wrong, unfortunately.
Yes, you can rely on FreeBSD. You can rely on NT too for certain things. That doesn't say much. I'd also like to point out that there are very serious holy wars out there over whether linux is superior to FreeBSD along with the general consensus in the linux camp that they will catch up (if they haven't already) with the BSDs in short order. The evidence is inconclusive..
Lastly.. about that "killer caching proxy"... umm, with all that bandwidth, why would you need proxying anyway? by that time you're probably a backbone provider and don't need to worry about stuff like that. Caches are used by ISPs with a T1 or two or corporations to limit bandwidth.. not by super-sized ISPs (not generally - AOL comes to mind as an exception). And why the 2940UW (I'm assuming you're thinking adaptec)? They have Ultra160 fibre now in the AIC-78xx chipsets which is register-compatible with the aic78xx module for linux... or for the *BSDs.
Re:My boss will love this article. (Score:2)
It didn't win. (not flamebait!) (Score:4)
No offense, but you call that winning? It lost to its competitors categorically and across the board - hits, latency, cost/performance.. what's the good news? Anyone?
Re:Remember NT? (Score:2)
Szo
Re:Architecture of Caching to large-scale sites (Score:1)
Re:It didn't win. (not flamebait!) (Score:1)
Re:It didn't win. (not flamebait!) (Score:1)
It's important to dig into the numbers.. (Score:1)
Cache the world!
--
blue
Re:It didn't win. (not flamebait!) (Score:2)
and across the board - hits, latency, cost/performance.. what's the good news? Anyone?
Hi.
If you'd kindly point your browser back to the top of the screen, you might take a moment to re-read the post. Squid+FBSD did well. The ICS-based solutions cost bazillions of bux0rs and brought along 100+GB disk arrays, and, pound for pound, were not that much better. The microbits entry did rock, and it's about the size of a personal pan pizza.
there's a reason i posted that it's important to READ the ARTICLE, not just grab the first table you see and start wallowing about.
:)
--
blue
Re:TTchorus: site gets slashdotted, why not cache (Score:1)
> Tangenital
ick
{chuckle}
To answer your question, there are two main reasons why this shouldn't be done.
1: Copyright could be infringed on the pages being cached
Okay, that makes a certain amount of sense and I can understand being cautious, but caching makes the web go around. It's already pervasive, or so I was given to understand. As another poster mentioned, what about Google?
I guess the size of the financial club is often more relevant than the technical legality of something. {sigh}
2: Many sites get their revenue from click throughs and banner ads. If
Wouldn't bother me personally, I never see 'em anyway, I use Junkbuster or turn autoloading images off. It's amazing how much more responsive surfing is then.
-matt
Re:TTchorus: site gets slashdotted, why not cache (Score:1)
For onlookers: The link is here [slashdot.org].
And it doesn't say anything about copyright, just time and money (isn't it always?).
-matt
Re:Accelerate your website -- it's awesome! (Score:1)
thanks
-matt
thanks (Score:1)
TTchorus: site gets slashdotted, why not cache it? (Score:2)
At least once per story, somebody suggests that slashdot cache or mirror sites they link to in order to avoid the dreaded Slashdot effect.
I have yet to hear an explanation of why this might not be a good idea. Anybody out there have one?
(honest question)
Re:/.tted--copyright not issue (Score:2)
I can see that argument if you surrounded their page in a frame, or replaced their banners with yours, or something which somehow makes it unclear who the real owner/producer of the page is.
How about a script or program which:
-caches the linked page when the story is first posted
-periodically checks the page for response time
-if $lag > $unbearable then serve cached page with an inserted headline which says "the host server http://blahblah appears to be overloaded"
This way the big companies would host their own material (the assumption being they have enough money to have bigass servers and don't need to be cached) and only the little guy with the cool make-your-own-transmeta-chip page who actually _needs_ to get cached, will get cached.
Is there some reason this wouldn't work?
-matt
Why the BSD vs Linux flames? (Score:5)
Out of sheer curiosity I tried out FreeBSD. Their kernel is incredible. I know that the benchmarks aren't there to show it, but their "claims" are true.
Their TCP/IP stack is better, loads can be handled with ease even on extremely low-end systems, and their memory management is out of this world. I was impressed at how fast my shitty unix boxes went.
Now I know that linux heads like myself would become defensive but linux has made big improvements and a lot of issues are being addressed with the next 2.4 kernel. Their "claims" will be seriously tested soon.
I have decided to go back to linux because I prefer it. There's more software and it makes a better desktop for me. Plus it is stable enough, user friendly enough, fast enough and damn good!
However, FreeBSD is a great unix OS and the only way to find out is to try a BSD yourself. Even a linux head like me can defend FreeBSD.
Keep up the good work, all you BSD contributors
Re:Accelerate your website -- it's awesome! (Score:2)
It's useful in scenarios where you have a large web server farm. By implementing reverse caching and lightening the load on your web server farm, you don't have to have quite so many web servers. It also has the net effect of making your web site appear to be "faster" since users will see the images more quickly from the cache than the web server.
Re:DOS port (Score:1)
My experiences with Squid (Score:5)
We have roughly 100 machines on our network, and Internet access was coming to a standstill - especially when everyone in the computer lab was on the Internet. Imagine a 128Kb/s fractional T1 with 25 *active* users all trying to look at mega-image-rich content, plus some other users on campus accessing the Internet at the same time (can you say sub-300 baud and ping times measured in whole-second increments?). I was having to pre-load web sites before a class came into the computer lab because just loading the first page could take roughly five minutes on a good day.
Then I configured and installed a Squid server on a rejuvenated Compaq Deskpro running Linux 2.2 that was donated with the above-said specs. I was a little hesitant to implement it across the entire campus at first because I had always heard that proxy servers were a Bad Thing. So I silently pointed browsers to the Squid machine in a few classrooms to see if I would hear anything from anyone. I got calls from people that very day. They were asking me how I had finally coaxed our school district into buying us such a fast connection!
As it goes, the more classrooms I pointed to the proxy server, the faster things got (as the cache was growing and the hit rate was increasing), and the more happy teachers I had. In a school situation, many sites are visited multiple times by different students and classrooms. In the computer lab, every computer often visits the same site as a class. So having a caching-proxy server helps a great deal! I really believe that every school with less than a T1 should have one.
As for statistics, I have an average 'hit' rate of well over 80% because of the multiple viewings of sites. Initially I had 2GB set aside for caching purposes (on an IDE Samsung 2.1GB drive), and I found that as it reached its capacity the server just got way too slow. So first I brought it down to 1.5GB, and now I have it at 1GB (I may even take it to 750MB). It has been running pretty fast at 1GB - by far compared to not having a caching-proxy server at all, but I do see the performance start to degrade at about 750MB with my particular hardware.
Sure, faster server hardware would be *great* and is probably necessary to handle our unusually heavy load due to all of the graphics content on the visited sites, but right now that just isn't an option because we live on donations. My point is that even though we are running Squid on such a crappy box, it has worked wonders on our network. Internet access seems very fast now, whereas before it was almost unbearable. And most importantly, people are happy and making use of the technology we have to its fullest extent, whereas before they may not have been able to do this. I must admit though that I am writing grants in hopes of getting a faster/newer box because ours is getting tired and I worry about what will happen when the hardware finally kicks the bucket.
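For anyone curious, shrinking the cache like that is a one-line change in squid.conf; a hypothetical example (the path and directory counts are just the common defaults, and the store-type argument only exists in newer Squid versions):

```
# 1 GB on-disk cache, 16 first-level / 256 second-level directories
cache_dir ufs /var/spool/squid 1000 16 256
```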
For a school in our situation, Squid is great because it even helps when you're using it on otherwise possibly worthless hardware, and the price is just right.
Anyways, I'd like to thank all who have donated their time on the Squid project, you've done great work and you're helping people more than you realize!
--SONET
http://www.hbcsd.k12.ca.us/peterson/technology
Re:Why the BSD vs Linux flames? (Score:1)
Re:Architecture of Caching to large-scale sites (Score:1)
But still, it's a good matter of principle to do so, to guarantee the behaviour of clients.
Also, this only extends to private caches, and public caches really like to see HTTP/1.1 headers before holding on to them (again, it depends on the exact cache, obviously)
Architecture of Caching to large-scale sites (Score:3)
The technical report can be found at http://www.netapp.com/tech_library/3071.html [netapp.com]
We would all save a scary amount of bandwidth if more sites were designed with public caches such as (the awesome) squid in mind, and it's a really simple use of headers that make it possible.
For those who use Apache and are interested in making your own sites more cache-friendly, I recommend you look at mod_expires [apache.org], which is part of the default distribution of Apache, although not compiled in by default. If you have large, static images that rarely change, then go ahead and put week/month/year-long expiry headers on them, and watch the hits for those redundant images drop right down on your web server. And if you suddenly need to change them, it's no real problem, as all you have to do is change the image's URL and it will become a "new" entity for purposes of caching.
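A minimal httpd.conf sketch of what that looks like (the module path is build-dependent, so treat these lines as an example rather than a drop-in config):

```
# Load mod_expires (path varies by build) and turn it on
LoadModule expires_module libexec/mod_expires.so
ExpiresActive On
# Rarely-changing images may live a month in any public cache:
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
```

Rename the image when it does change and every cache treats it as a brand-new object.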
Yeah, granted, bandwidth is getting cheaper now, but for us poor Europeans, it's still a scarce commodity and we need to worry about these things
-anil-
Re:Accelerate your website -- it's awesome! (Score:1)
Re:TTchorus: site gets slashdotted, why not cache (Score:2)
ick
To answer your question, there are two main reasons why this shouldn't be done.
1: Copyright could be infringed on the pages being cached
2: Many sites get their revenue from click throughs and banner ads. If
dave
Re:My boss will love this article. (Score:2)
by that time you're probably a backbone provider and don't need to worry about stuff like that. Caches are used by ISPs with a T1 or two or corporations to limit bandwidth.. not by super-sized ISPs
Uhm... you're kidding, right?
Did you ever think about how much of that bandwidth your high-speed clients (DSL, cable modem) can eat up? And how much of it is redundant? (i.e. cacheable)
Re:Accelerate your website -- it's awesome! (Score:3)
well-tuned operating systems that eliminate traditional OS overhead for these numbers.
True, but the operating system that Squid was running on (and that's what you were talking about, the operating systems) was FreeBSD, which also runs the iMimic, which captured the highest hits/sec and reqs/sec per $1000. By a large margin. Interestingly enough, the only Linux-based entry, the Swell-1000, didn't do very well. Which goes to show that a good starting point doesn't guarantee success.
And, of course, the amazingly expensive Cisco products probably (I don't know, just assuming) do a lot more than just cache -- and are probably a lot more reliable (MTBF) and redundant, which is important if your cache is a vital business component. (And if cache == internet access, then, well, it probably is).
Re:Cached idea, stale opinions. (Score:1)
Yup, and if you can't provide better arguments, and back them up with reason, then stay anonymous, Coward. Or better yet stay quiet.
Re:TTchorus: site gets slashdotted, why not cache (Score:1)
Re:Accelerate your website -- it's awesome! (Score:3)
A connection is destined for "www.excitestores.com", and ends up at the external DS/3 (T3, T1, insert your fast link here) port on our router. The router runs a rule against the packet and says "Hey, this is www traffic bound for the servers that are to be accelerated. Therefore my next hop is (insert IP address of cache here)!". It route-maps it to the cache server as its next hop. The caching server is set up to "hijack" any incoming connections as if they are destined for itself, and makes the request to the origin web server on behalf of the requesting client. At this point, this does not differ too much from standard forward transparent proxying, except that you normally have an access control list that only permits transparent proxying of a limited set of URLs or IP addresses. You don't want to run an "open proxy" for the world to use to cache whatever they want.
Of course, note here that there are alternate methods of accelerating sites depending on the cache you choose and your infrastructure. The basic idea is to get the packets to your cache instead of the web server, however you choose to do it. Common methods include placing the cache in the natural route of the packets, making the webserver address point to the cache and have a non-public DNS that the cache looks to to resolve a web site on a non-routeable private network, or specifying on the cache that incoming connections on a certain IP are to accelerate a particular origin server.
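As a concrete illustration, with Squid the last of those methods boils down to a few lines of squid.conf. This is a hedged sketch, not our production config - host names are examples, and these httpd_accel directives belong to the Squid 2.x era:

```
# Hypothetical Squid 2.x accelerator fragment
http_port 80
httpd_accel_host www.example.com      # the origin web server being fronted
httpd_accel_port 80
httpd_accel_uses_host_header on
httpd_accel_with_proxy off            # accelerate only; no forward proxying
acl our_site dstdomain www.example.com
http_access allow our_site            # don't be an open proxy
http_access deny all
```

The acl/http_access pair is the access control list mentioned above: the cache answers only for your own site.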
Anyway, the benefits of this are enormous in our case. We have a (*&$load of modules compiled into our Apache server, tons of virtual hosts and modules to handle them all, and each daemon runs about 12 MB. Each web server has a gigabyte of RAM, therefore you do the math:
1024/12=85 and 1/3 connections run us out of physical RAM on each web server. Realize this is a rough estimate; our web servers can handle much more, but performance degrades quickly with more connections being served from virtual memory. I've also not taken into account OS overhead, other services running on the servers, and any other thing you may think of. However, modem users, particularly, saturate web server connections because it is so slow to deliver objects to them.
CNN.com, for instance, uses ICS caching boxes purely for connection management to handle these slower connections that could bog their servers down. Novell's ICS is rated at over 100,000 simultaneous connections on each box in reverse proxy mode. A big difference from 85 connections for one machine, no?
I'd love to discuss this in more depth, if you require a better answer. Better yet, check the FAQ at Squid's site [squid-cache.org] regarding transparent reverse proxying.
Seriously, this is what takes web sites to the next level, regardless of whether you use Squid, ICS, NetCache, or another type of reverse cache. Keep smiling!
Accelerate your website -- it's awesome! (Score:5)
It's really important to note that IRCache has no desire to point to any "winner" in this bakeoff, but instead to have real non-partisan numbers to point to when evaluating cache performance. Squid captured top honors in cache hit ratios, but nothing else (AFAICT), showing that those "expensive, proprietary systems" also can be very well-tuned operating systems that eliminate traditional OS overhead for these numbers.
One of the frequently overlooked uses of cache is as a web site accelerator, instead of the standard forward proxy. Using a few simple access control lists and a policy on a router, reverse-proxy caches managed to reduce the instantaneous load on our web servers by up to 94%. We serve about 3.5 million hits a day. A "reverse proxy" is an EXCELLENT use of a proxy cache, and after these technology evaluations I've been involved with in past weeks I'd recommend it to anybody considering running a high-traffic website. This allows your Apache servers to function more as the "cgi engine" of your site, and lets the static images, text, banners, etc. be delivered from a box that can handle 100,000 simultaneous connections. Very cool.
While I'm not allowed to post a "review" of any one of these units, because of various agreements for the evaluation boxes we tested, I can clearly state that Squid, NetCache, and ICS-based systems can and will vastly reduce infrastructure scalability costs for businesses when deployed in a reverse-proxy configuration. Our earlier estimates guessed we'd need to expand our web farm three times to handle our estimated load by the end of the year. Now we can reliably predict that our farm can serve 10 times the amount of hits we're running now by using a cache as an accelerator. VERY cool stuff.
Be sure and check out the system configurations in the bakeoff review. It's very illustrative that the boxes tested have VERY specific audiences. Don't be fooled by the "fastest hit response time" or "most throughput" -- you can spend $6,000 or $150,000 for any setup, depending on your needs.
Noticeably absent from the review was Inktomi, for the second year in a row. I'm hearing FUD from vendors that their performance isn't up to snuff-- any truth to these rumors?
Re:Wow! Look at that ICS stuff! (Score:1)
ICS is designed for people who don't know NetWare--it's a NetWare kernel with the ICS stuff on top of it. Say what you want about the NetWare file system, it's pretty fast when tuned for stuff like this.
Re:Can anyone explain that to me? (Score:1)
On a side note, Google is going the whole nine yards embracing BSD- they are considering setting up a BSD-specific search engine, not unlike their current linux engine. (I've talked to a guy from google about this, more at http://daily.daemonnews.org/view_story.php3?story
So, they aren't really a linux or a FreeBSD camp. Currently, they are both.
Re:FreeBSD & load (Score:2)
FreeBSD's VM system has been tweaked, fiddled with, and rewritten for 4.0 (by Matt Dillon) for efficient swapping. It swaps out idle pages when there is free I/O even if there is plenty of physical RAM available - so if a sudden demand for pages came up, it could simply discard a page in RAM (a copy already exists in swap) and reuse the frame, instead of being forced to swap out like mad right when you need the memory.
I don't think Linux has preemptive swapping, and if it does it is new, and I'm doubtful that it is as mature.
Try putting the boxes under serious load and try again.
Very true (Score:1)
Hence, making khttpd a non-static web server would be a very foolish thing to do, and would mangle system stability.
Static web serving is not a problem (once you debug the code).
Re:Very true (Score:1)
>Nothing is a problem once you debug the code.
Oh, right. So all programming tasks are of equal difficulty then?
Perhaps we *should* be using a microkernel, but until we are, anything that goes into the kernel should be very robust. There's no reason why a static http server can't be robust, and there's every reason why a dynamic one (or an entire graphics subsystem, as per NT) is going to cause huge headaches.
Re:I don't think Squid won (Score:1)
Most of the top winners used Novell's Internet Caching System, a customized version of NetWare specifically designed for caching appliances. What really gives ICS a lot of speed is COS, the Cache Object Store, which is a specialized file system designed for caching. Much of the overhead of a traditional file system (file integrity checks, etc.) isn't required for caching.
In addition, Netware is an awesome network traffic processor. We don't use the same threading model as *nixes. So, a fast file system and a fast network response = an awesome caching appliance.
No slam on BSD, they are doing great with generic software, but because of the way Netware is architected, we're just faster.
-Todd
Re:My boss will love this article. (Score:1)
Have a look at the man page for the FreeBSD mount command [freebsd.org] and say that (search for async while there). For those occasions when you don't mind flying by the seat of your pants, you can of course have async writes. It isn't a new option either. And there's always softupdates too.
I suppose it wouldn't be so easy to win arguments if people actually checked their assumptions...
Re:Oh wow, John Carmack talked, everyone be quiet. (Score:2)
Somehow, a moderator decided that:
"Nothing is a problem once you debug the code."
A true, if somewhat tongue in cheek comment, is being a troll, but:
"Go work on your games or something."
ISN'T!
This reminds me of a chemistry class I once took.
"Nothing is a problem once you've got the balanced reaction equation."
"Go read a book or something."
Why do the moderators (or perhaps this one moderator) sanction trolling behaviour, and dump on a genuine statement?
See you all metamoderating...
---
Wow! Look at that ICS stuff! (Score:1)
Novell's ICS did pretty damn well! I still remember when ICS was part of BorderManager. It showed very good potential and was incredibly flexible. You can't find such a configurable caching solution anywhere!
Now, I believe ICS has stripped-down configurability, but upped the performance.
Good job, Novell!
Re:What in the fuck ... (Score:2)
Look at the performance the Compaq box is sporting! It's purely amazing!! Obviously, Compaq's many years of cooperation with Novell, and the many NetWare drivers they have developed, helped them with the ICS appliance, too. Let me remind you that ICS actually runs on NetWare (but without NDS).
Copyright problems was one of them (Score:2)
Re:/.tted--copyright not issue (Score:2)
Just because Google hasn't been sued doesn't mean copyright is not the issue, although it would be interesting to see someone try that one in court
"Well how can I be at fault? Nobody has sued Google yet!?"
Re:FreeBSD & load (Score:1)
I find it hard to believe that Linux can be so slow. I have run Linux on even slower boxes, e.g. a 486DX100 with 48MB RAM. Even running X and KDE, the response was nowhere near as slow as what you claim to have experienced. Not that I doubt your word, but I would seriously check whether anything is wrong with your hardware or configuration.
I personally have not found any difference between Linux and FreeBSD, at least as a desktop OS. They both respond with indistinguishable speeds. They are both very stable. I have not used either as a server, but have many friends who have, though not under extreme conditions.
That being the case, I think that FreeBSD's perceived superiority is a myth. I feel that, for all practical purposes, you can use whichever you prefer with no performance penalty whatsoever.
If anyone has any pointers to studies that stack up a recent linux kernel against a recent FreeBSD kernel and prove or disprove my belief, I would love to see them. Thanks very much.
Hari.
Re:PFFT, so What (Score:1)
If someone posted:
Microsoft rules over Linux.
or perhaps
FreeBSD rules over Linux.
Would you consider that a 'dissenting opinion and the truth'?
If you care about shipping, buyable voice-to-text systems, then yes, M$ rules over Linux.
If you think the GPL licence is a bad licence and the BSD licence is better, then yes, FreeBSD is the open-source ruler over Linux.
In the case of these two examples, Linux is the loser.
Rather than spending your time whining about moderation, why don't you spend it writing some code, or at least work on extracting your head from your ass.
[Net]BSD+LFS (Score:1)
I always wanted to give LFS a try on our production webcache (squid as well), since I've read some documents about it---too bad that LFS isn't mature enough yet in -current, and hence probably won't be in 1.5 either :-(
Re:One Word for You (Score:1)
Oh, you do! (Score:1)
Too bad that you are already using BSD without
knowing it:
1. Parts of BSD are built into nearly every other OS which supports the internet protocols: Windows, Linux, Solaris, BeOS...
The "Sockets" interface to network protocols that all those OSes offer is a BSD development.
2. Many, many routers run on BSD-derived systems.
3. Many nameservers run on BSD systems; the Berkeley Internet Name Daemon, aka BIND, spun off from BSD.
4. Some of the pr0n servers you visited yesterday run on BSD.
5.
It's absolutely impossible to use the internet without using BSD.
It's absolutely no problem to use the internet without touching Microsoft or Linux.
Incredibly Funny! (Score:1)
a) +1 Insightful
b) -1 Troll
ROFLMAOPIMPTIME
Hey, trollking, a little tip for you how to
get a "+5, Informative:"
Next time try:
"I would just like to voice my support for Linux. It is the best OS ever, in my humle opinion."
Can anyone explain that to me? (Score:1)
% grep -i "who\|rules\|world" index.html|wc -l
0
Then I downloaded the cached page from google as well:
% grep -i "who\|rules\|world" cache.html |wc
0
OK, "who" is ignored, says google. But that page doesn't contain *any* of the search keys except "the".
I don't understand it, why does google give such bogus results?
(AFAIK Google is linux powerd so this can't be a FreeBSD conspiracy
Why? (Score:1)
Actually, I chose OpenBSD over FreeBSD (or any other OS), for the same reason I chose the 2940UW over another SCSI chipset-
It may not be the latest and greatest cool technology, it may not be the fastest, but I know, from personal experience, that I can rely on it.
The drives and controllers are relatively inexpensive, so I can afford to keep spares on hand, and when the current solution becomes overloaded I can easily scale it up.
More detail on this in the message 'Distributed proxies' elsewhere in the thread.
My boss will love this article. (Score:3)
When I spec'd it out, all the techies I talked to asked me three questions; this article validates my answers to all three-
My answer to each was two parts:
Semi-Off-Topic
What do I mean by a 'Killer caching proxy'?
A pair of identical (load balancing and transparent failover via BigIP) rackmount servers, each with a PIII 600 CPU, 256MB, a 2940UW, and 20GB of disk. And let's not forget the triply-redundant T3s to three distinct Tier-1 internet providers.
All this just so I can read slashdot.
Distributed caches with 'proxy.pac' (Score:3)
I've done a lot of work with 'proxy.pac [squid-cache.org]' files in the last year- it's amazing how much decision-making power you can put into the autoproxy script, letting the client machine take on some of the responsibilities of smart proxying.
For example, right now I have two distinct sites with their own Squid proxies, and users at both sites use identical 'proxy.pac' files. The browser decides whether to go direct or via a proxy based on the host/domain of the destination, then chooses a proxy based on its own source IP address.
This means that every Netscape and IE browser in the enterprise has the same configuration, and even roaming users will always get their closest proxy server each time they connect.
If a business unit later gets their own internet firewall and proxy, it takes a line or two in the global script, and clients automagically use the new proxy.
You can also specify multiple proxies in the file- if the first one times out, all future requests (until the browser is restarted) will go to the next server in the list.
Now if only Lynx would parse the (javascript) proxy.pac file...
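The scheme described above fits in a very small proxy.pac. A minimal sketch: browsers supply `dnsDomainIs`, `isInNet`, and `myIpAddress` to the script, so the stubs below exist only to make the sketch runnable outside a browser, and every hostname, domain, and subnet here is a made-up placeholder, not the poster's actual config:

```javascript
// Stubs for the helpers a browser's autoproxy engine normally provides.
// (Only here so the sketch runs standalone; hypothetical values.)
function dnsDomainIs(host, domain) { return host.endsWith(domain); }
function myIpAddress() { return "10.1.2.3"; } // stub: pretend this client sits at site 1
function isInNet(ip, net, mask) { // naive dotted-quad subnet test
  var n = function (s) {
    return s.split(".").reduce(function (a, o) { return a * 256 + parseInt(o, 10); }, 0);
  };
  return (n(ip) & n(mask)) === (n(net) & n(mask));
}

function FindProxyForURL(url, host) {
  // Intranet destinations bypass the proxies entirely.
  if (dnsDomainIs(host, ".corp.example.com")) return "DIRECT";

  // Pick the nearest cache from the client's own source address; the
  // second PROXY entry is the automatic failover mentioned above.
  if (isInNet(myIpAddress(), "10.1.0.0", "255.255.0.0"))
    return "PROXY cache1.corp.example.com:3128; " +
           "PROXY cache2.corp.example.com:3128; DIRECT";
  return "PROXY cache2.corp.example.com:3128; " +
         "PROXY cache1.corp.example.com:3128; DIRECT";
}
```

Adding a third site, or a business unit's new firewall, is one more `isInNet` branch in this one file, which is exactly why the global-script approach scales so nicely.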
Where exactly does Netcraft say that? (Score:2)
Thank you
Re:Architecture (Score:1)
The async io compile option allows what you've described. And it does wonders for performance. The Swell results from the bakeoff compared to our current results published on our page are a good example of this (we rated 77 req/sec at the bakeoff...we're testing 110 req/sec on less hardware in the lab right now). The difference was that we were experiencing problems (system lockups) with async io compiles at the bakeoff, so to simply get a solid run in we ran without async...performance was less than ideal because of it.
Those issues have been resolved, and the async io builds are significantly faster.
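For reference, on the Squid 2.x of this era the async I/O code was a compile-time choice rather than a runtime one; a build sketch, with a placeholder install prefix:

```
# Hypothetical build: Squid 2.x with the async-I/O (pthreads) code compiled in
./configure --enable-async-io --prefix=/usr/local/squid
make all
make install
```

This is why "ran without async" meant rebuilding the binary, not flipping a config knob.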
Re:My boss will love this article. (Score:1)
BSD is great for Squid because of the excellent stack and reliability (and it's the platform of choice for Duane and most of the other lead developers), but Linux is better if you want performance. Async I/O is only available under Linux and Solaris, and it makes a HUGE difference (look at the Swell results at the bakeoff without threads and our results more recently with threads--77 reqs/sec at the bakeoff, 110 running in our labs now--from a $2139 dual-IDE-drive box! Performance is on the Linux side).
Recent benchmarks of a very tweaked linux/squid box are posted at the swell technology website doing 100 req/sec.
Most WERE above average... (Score:1)
To find the below average vendors you'll need to look to the folks who didn't show up this time.
While I have an obvious interest in promoting the Swell entry (which did quite well...but not as well as we expected due to some unresolved bugs), I spent a lot of time talking to the other vendors techs. There were some very smart people pushing very good products at this bake-off. I wouldn't hesitate to recommend (for customers that need more than our boxes provide and can spend the extra dough, of course) many of the products tested.
I know it sounds rather flowery to say that "All the girls in the pageant were very pretty", but in this case, I think the bake-offs are really separating the wheat from the chaff. Look at previous bake-off numbers and prices and compare to this time around. Keep in mind that this bake-off's workload was MUCH harder than previous workloads (the Polymix-1 or Datacomm-1 workloads found in previous comparisons); even so, the price/performance has improved markedly from all vendors. And the price/performance also-rans from previous events just didn't show up this time.
That's why the polygraph guys deserve such praise. They allow cache users to really know what they are buying. And the companies that don't show up or don't provide a good value just won't sell as many boxes. (And I strongly recommend against buying an untested cache product...there are some real stinkers out there and you don't always get what you pay for.)
Re:My boss will love this article. (Score:1)
But squid currently performs better under Linux, when proper tweaks are made. If you can make a BSD box do 110 reqs/sec from a K6 and 2 IDE hard disks with a stable version of squid, then I'd love to hear about it. It simply isn't going to happen currently. It's not even easy to do with Linux.
When the new squid filesystems come online, all the rules may change. But currently linux is the speed king for Squid.
Look at the Swell entry, instead... (Score:2)
You should look a little deeper into the results. The Microbits box was only caching about 44% of web traffic and getting rather slow response times. So while they got 120 reqs/sec, no sysadmin in their right mind would push that box that hard. To compare apples to apples with the Squid entry or the Swell entry (both had nearly ideal cacheability and excellent response times), you should think of the Microbits box as being more along the lines of 95 or 100 reqs/sec.
To see Squid results in more favorable light, check out the more recent results on the Swell web page:
http://www.swelltech.com [swelltech.com]
Our test box at the bake off was having fits using async io...so we disabled it in order to get a clean run. However, performance suffers markedly without it. Those async issues have been resolved...Our boxes are running in our labs at 110 reqs/sec right now (we have a 100 reqs/sec run benchmark online...you can note that response of squid is still excellent at that load).
Anyway, given the proper tweaks, Squid can really scream on a low priced box. (Our $2139 unit is the one included in the bakeoff and our more recent benchmarks.)
Look a little closer at the numbers people! (Score:4)
Squid showed perfect cacheability (why buy a cache except to cache?), whereas some others in its price range (except the Swell box, also running squid) displayed much lower cacheability. Response times from a lot of boxes were not so good either, while squid's were excellent (the other reason to cache...browsing speed). When you see a box with long response times and a low cache hit rate, you are looking at a box that was being pushed WAY too hard. You would not run a cache with 30 or 40% DHR and mean response times of 2 seconds...ideally, you run it such that cacheability is near perfect and response times are very, very fast. Squid did that. Microbits didn't.
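Why the hit rate dominates perceived speed is easy to see with a toy model; the millisecond figures below are invented for illustration, not bake-off measurements:

```javascript
// Back-of-the-envelope: mean response time is a weighted mix of fast
// local hits and slow trips to the origin server. (Illustrative numbers.)
function meanResponseMs(dhr, hitMs, missMs) {
  return dhr * hitMs + (1 - dhr) * missMs;
}

// A cache serving 85% of requests locally vs. one overdriven down to 44%:
var healthy  = meanResponseMs(0.85, 30, 1500); // roughly 250 ms
var thrashed = meanResponseMs(0.44, 30, 1500); // roughly 850 ms
```

Same hardware, same "reqs/sec" headline, but the overdriven box feels several times slower to every user behind it.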
The Squid team have done a great job with Squid, and it gets better every time around. Even compared to the ICS products (many of which are very very fast these days...but you pay the price for them...ICS on low end boxes suffers a bit), Squid didn't do so bad at all.
Anyway, if you'd like to see some more Squid numbers, we've got a $2139 squid box in the lab doing 110 reqs/sec from dual IDE drives, whereas the Squid team got 160 from a $4k box with 6 SCSI 10k drives. We will be posting pretty specific specs for it sometime in the future so that others who want to roll their own can do so (it takes a lot of work). Some of our recent benchmarks (using Bake-off rules and benches) are posted on the Swell Technology web page. Currently, the posted benches are for a run at 100 reqs/sec. The 110 run will be posted sometime soon.
Those interested in caching should check out the squid devel list lately. Discussion has centered on a couple of new filesystem ideas that should improve squid performance markedly. Fascinating stuff. I suspect the ICS guys will be a little more worried come next bake-off.
Re:Very true (Score:4)
Nothing is a problem once you debug the code.
John Carmack
Re:Architecture (Score:1)
BTW - thanks to the Linux Scalability Project [umich.edu], the 2.3 Linux kernel will perform async I/O very efficiently. Netscape is 100% responsible for this - their IMAP server uses the same async architecture, so they patched the kernel for their IMAP server under the guise of this project.
Rob - you learn anything from this article? (Score:2)
* Use a preprocessor like PHP instead of basing everything on CGI
* Don't use Apache unless you really need to. Smaller servers like thttpd or BOA will often supply everything you need, are much more lightweight and much faster
* Use a web accelerator like Squid
* If you *must* use CGI, see if you can't implement it with something like fast-cgi. Especially with Perl!
And of course, I'm sitting here posting on a web site that hasn't implemented any of the four. Slash's code is absolutely frightening -- all the scripts use the same humongous module (Slash.pm), which uses DBI and *gulp* Date::Manip. And you wonder why the site gets slow!
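For the third bullet, turning the Squid of that era into a web accelerator was a handful of squid.conf lines using the 2.x httpd-acceleration directives; the hostname and ports below are placeholders, not any real site's config:

```
# squid.conf sketch: Squid 2.x-era httpd acceleration (example host/ports)
http_port 80                        # squid answers on the normal web port
httpd_accel_host www.example.com    # the real (backend) web server...
httpd_accel_port 8080               # ...listening on an alternate port
httpd_accel_with_proxy off          # act purely as an accelerator
httpd_accel_uses_host_header on     # needed when accelerating virtual hosts
```

Squid then absorbs the repeat hits for static pages and images, and the heavyweight backend only sees the misses.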
squid bake off? Yum! (Score:2)
Make Seven
Caching vs. Akamai-type services (Score:2)
Almighty squid (Score:4)
Way too many times, open source software is dismissed as a sort of dull knife -- it gets the job done, but doesn't do it in an elegant or efficient way. Take apache, for example: how many people rag on apache because of its focus on compatibility over speed?
For Squid, I honestly can't think of a better overall proxy. If www.proxymate.com can handle the massive amount of traffic it does running Squid on Linux, all but the most stump-headed ignoramuses would realize that a business needn't drop a couple thousand $$ on a specialized platform.
Re:PFFT, so What (Score:2)