When is the kid considered stable?
When is the software stable enough?
I am not developing software daily, but it seems the inner debate about the stability of our different “kids” and fruits of our work is always there. Every time I consider something stable enough for production, some other voice adds a doubt about the software's stability.
Specifically, in the squid-cache and the open-source world the question is always on the table. The source is open for everyone to find the next bug. Some are happy with what they already have, while others expect the equivalent of a Ferrari.
When I am writing software, I do my best to write it with the main goal that human lives could be entrusted to it. One of the reasons I try to meet such a goal is that in real life I try to do the same. When someone asks me a question or turns to me with a request or a word, I know that he asks for a reason. There is no chance in the world that the occasion is the result of only “A Series of Unfortunate Events”. I was asked a couple of times in the past, “Why do you bother to answer?”, and the answer is the simplest one: the sanity of the other party is in my hands.
Sometimes a REDIRECT is the right answer, but never a DROP or REJECT. These are actions of “war”, and when we are talking about HTTP in general, there is a war out there. I have had the chance in the past to work on a couple of layers of the Internet, from the hardware up to the application, and I chose to invest a lot of time on layer 4 and above.
There are many tools in this war zone, and every time a new tool comes into use it gets its own life cycle. The issue is that it takes time for every tool to become mature enough to serve different purposes.
Squid 3.5 has already been in use by many users and admins for a while and has been considered stable for a very long time, but now aging starts showing up. There are different levels of maturity, but the basic one is a period of 30 days of uptime. We do expect more, but a restart once every 30 days without any crash would be considered stable.
For now I am on the hunt for fatal bugs in the 3.5 series. The reason for this is to measure the maturity of the branch!
Lately I have been working on a couple of tools, and one of them is a library for querying distributed rate blacklists and blocking. Using this library I wrote an external ACL helper for Squid that can help many admins use OpenDNS, Symantec, or other internal DNS blacklists.
The library sources can be found at:
And as a binaries package at:
squid.conf example of usage:
external_acl_type dnsbl_check ipv4 concurrency=200 ttl=15 %DST %SRC %METHOD /opt/bin/squid-external-acl-helper -peers-filename=/opt/bin/peersfile.txt
acl dnsbl_check_acl external dnsbl_check
deny_info http://ngtech.co.il/block_page/?url=%u&domain=%H dnsbl_check_acl
http_access deny dnsbl_check_acl
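To illustrate what the helper does behind that configuration, here is a minimal sketch of the Squid external ACL helper protocol in Python. The blacklist check itself is a placeholder (the domain name is made up); the line format follows the `%DST %SRC %METHOD` fields and the `concurrency=` channel-ID prefix from the example above.

```python
def handle_line(line):
    # With concurrency enabled (concurrency=200 above), Squid prefixes
    # every request line with a channel ID; the remaining tokens follow
    # the configured format (%DST %SRC %METHOD in the example).
    channel, dst, src, method = line.split()[:4]
    # "OK" means the ACL matches -- and since the example uses
    # "http_access deny", a match is what blocks the request.
    verdict = "OK" if is_blacklisted(dst) else "ERR"
    return "%s %s" % (channel, verdict)

def is_blacklisted(domain):
    # Placeholder for the real peers lookup; blocks one made-up domain.
    return domain == "bad.example.com"
```

A real helper would loop over stdin, writing one verdict line per request and flushing after each so Squid is never left waiting.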
Example of peersfile.txt:
The syntax of the above file is:
type<space>address<space>path (for http services)<space>port<space>rate for the host (uint)<space>one or more addresses which indicate a blacklisted domain, separated by spaces.
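As a sketch, a line in this format could be parsed like this (the field names are my own labels, taken from the description above):

```python
def parse_peer_line(line):
    """Parse one peersfile.txt line into its fields.

    Field layout, per the description above: type, address, path (used
    by the http/https types), port, rate/weight for the host (uint),
    then one or more addresses that signal a blacklisted domain.
    """
    fields = line.split()
    if len(fields) < 6:
        raise ValueError("peers line needs at least 6 fields: %r" % line)
    return {
        "type": fields[0],
        "address": fields[1],
        "path": fields[2],
        "port": int(fields[3]),
        "rate": int(fields[4]),
        "blacklist_answers": fields[5:],
    }
```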
The type options are: http, https, dns and dnsrbl.
The http and https destinations are queried with a HEAD request to the host and path, and a match will be reflected in the response headers with an “X-Vote” header whose value is “BLOCK”.
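That check could be sketched like this, using Python's standard HTTP client; the function and header-handling shape are my own, only the HEAD request and the “X-Vote: BLOCK” convention come from the description above:

```python
import http.client

def vote_from_headers(headers):
    # A peer signals a blacklisted destination with an "X-Vote: BLOCK"
    # response header, per the description above.
    return headers.get("X-Vote") == "BLOCK"

def is_blocked_by_http_peer(host, path, port=80, timeout=5):
    """HEAD-query an http-type peer and return True on a BLOCK vote."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return vote_from_headers(dict(resp.getheaders()))
    finally:
        conn.close()
```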
The dns and dnsrbl types match when the lookup answer is one of the addresses defined from the sixth field onward in the definition. For example, the matches in the line:
dns 220.127.116.11 / 53 128 18.104.22.168 22.214.171.124 126.96.36.199
would be the addresses:
188.8.131.52 or 184.108.40.206 or 220.127.116.11
and the weight of a match here is 128. Since the default match weight of the helper is 128, this line alone would be a match and no more lookups would be done. In the case where one list is not a match, each of the listed peers would be tested until either a match is found or the timeout (default 30 seconds) is reached. If the timeout is reached, or the list of peers did not fully match the weight, the request will be allowed.
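The decision loop described above can be sketched as follows; `query_peer` is a hypothetical stand-in for the real per-peer lookup, and the fail-open behaviour (allow on timeout or insufficient weight) mirrors the description:

```python
import time

def decide(peers, query_peer, threshold=128, timeout=30.0):
    """Accumulate match weights from peers until a verdict is reached.

    query_peer(peer) is a stand-in for the real lookup and returns True
    when that peer lists the destination.  Block (True) as soon as the
    accumulated weight reaches the threshold; otherwise fail open and
    allow (False) when the peers are exhausted or the timeout expires,
    mirroring the behaviour described above.
    """
    deadline = time.monotonic() + timeout
    weight = 0
    for peer in peers:
        if time.monotonic() >= deadline:
            return False                     # timed out: allow
        if query_peer(peer):
            weight += peer["rate"]
            if weight >= threshold:
                return True                  # enough weight: block
    return False                             # not fully matched: allow
```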
With this tool you can use regular DNS services on your system, intercept the traffic on the proxy, and get a decision by “consulting” an external system, without compromising your clients with the special block pages that the DNS service redirects them towards.
So we have both a proxy and a simple tool which can help us prevent access to specific sites. The stability of Squid for this release is considered “Very Stable”, but it has yet to be tested on a scale larger than 400 users. If you are managing a system which runs Squid for filtering or caching with more than 400 users, please send us some input from the Squid manager info page so we would be able to rate the state of the software.
I am planning to write a tiny tool/script that will help to scrape the Squid manager info page and send the Squid-Cache project some feedback about these systems. If you are a Squid system administrator who is willing to share some statistics about your system with the project, please contact me at: firstname.lastname@example.org
I believe that if we could gather enough statistics, we would be able to declare that the software has passed the “masses” test, compared to a couple of single systems.
All the best,
On the plate:
- CA certificate test and installation HTML page (example page)
- Windows Root CA installation script (example page)
- Debian and Ubuntu stable and beta versions repository (without eCAP support) .. takes time to prepare
- ICAP DRBL query service
  - Package of binaries, sources and scripts
  - Sources and startup scripts on GitHub
  - I hope to publish the tool in RPM and DEB format
- Squid 4.0.17 basic functionality tests .. takes time to prepare