I am happy to “Certify” Squid-Cache version 3.5.19 as
“Works For Me” on
CentOS(6+7), SLES(12SP1), Oracle Linux(6+7), RHEL(7), OpenSUSE(42.1 Leap), Debian(8.4), Ubuntu(14.04+16.04)
HTTP is commonly used as an API for many purposes in every industry, and in many cases, if you analyze an API's specs and output, you can see that some thought was invested in it.
Around the Internet we can find many ideas about APIs, and while some are well publicized, others are long forgotten and considered “old”. It is true that when you look at some of these APIs they might look “cryptic” or “malformed”, but they have a purpose. Most of these APIs were meant to be public, and as users we have access to all of them. But many APIs also require some level of authentication or authorization, which was clearly meant to keep them from being fully public.
Some hackers around the world see an opportunity to “hack” something whenever possible. From my own APIs, which include HTTP, SMTP, DNS, WiFi hotspot, Mobile and many others, it is clear that some might think it's funny to send malformed packets towards a router or an AP. But I feel there is a need to clear a couple of things up for any hacker.
Behind any system on the Internet there is a person who deserves respect. The fact that an API is exposed does not mean its owner allows you to hack it, unless it was designed for that purpose.
When comparing the real world to Internet APIs, not everyone can enter any door or any place. Not everyone can enter a closed party or a secured area. And it differs from place to place, since the minimum requirements to enter one place will not be the same as another's.
For example, in the hacker world it's known that there are ways to prove your worth and earn your “nick” or “name”. Some hacking cultures are restrictive in their approach and respect any API, avoiding the flames of war, while others think it's better to hack some API as a Proof of Concept or a Proof of Knowledge.
White? Black? Green? Red? Is there any meaning to all of these?
My answer is that all of these are hats; I do not have one and I do not want one. I am a simple person who has a couple of very simple APIs under his hands. But I have learned that setting a good example is a profession. Specifically, it's not simple to set an example for a hacking kid. If any hacking kid wants to hack something then, like in the real world, there are playgrounds for this sole purpose, and an example would be canyouhack.it. Also, these days, if you want to learn how things work at the micro level, we have lots of free and open virtualization platforms. These exist in every part of the industry, from the electrical level up to the application level.
All these tools were meant for the sole purpose of making the learning curve easy, simple and safe: to use a real-world power tool in an environment which will tolerate things that might not be acceptable against real-world APIs.
The DNS system was invented not long before HTTP, and it's an API like HTTP and many others. It is commonly used over UDP and has a very limited message size and format, but it has power on the same level as a button on a car dashboard. Technically it can be, and is being, used in many places as a trigger for some system. Indeed, UDP is not reliable to the same degree as TCP, but when the network equipment is trusted there is no reason not to use UDP.
A list of things that can be done using DNS messaging:
On/Off electrical switch
Identity signaling (AKA port knocking)
Queue status updates
And many other uses, which give an example of what an API can look like. I had the pleasure of reading a couple of books about APIs published by Nordic APIs, which gave me a fresh perspective on how others see an API and on what might happen on the wild Internet that requires attention.
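To make the DNS-as-trigger idea concrete, here is a minimal sketch in Python (standard library only) of how a signal could be carried in the queried name itself. The “switch-on” command and the domain are invented for illustration, and a real deployment would also need authentication of some kind:

```python
import struct

def build_dns_query(name, qtype=16, txn_id=0x1234):
    """Build a minimal DNS query packet (TXT record by default) whose
    QNAME carries the signal, e.g. 'switch-on.device1.example.com'."""
    # Header: ID, flags (RD set), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def parse_qname(packet):
    """Extract the queried name (the 'message') back out of the packet."""
    i, labels = 12, []
    while packet[i]:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode())
        i += 1 + n
    return ".".join(labels)

# The receiving side would match the first label against known commands:
pkt = build_dns_query("switch-on.device1.example.com")
command = parse_qname(pkt).split(".")[0]   # "switch-on"
```

The whole “message” fits in a single small UDP datagram, which is exactly why DNS works well as a lightweight trigger on a trusted network.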
“Works For Me” means that it was tested in a testing environment under real-world usage in forward proxy mode, with daily usage traffic such as browsing news, video, learning and games sites. Special applications that were tested are Skype, IRC and a couple of other applications, inside a fully trusted network.
An advice: any system which sits against a non-trusted and hostile public or private network should be “hardened”, both at the squid configuration level and at other, lower levels.
This specific version (3.5.19) was also tested with ssl-bump, but only in forward-proxy mode and not in intercept mode.
What is a proxy as a tool? Is it a tool of war or a life-assisting tool?
The Internet is a reflection of the real world, and the world in general can at times be a war-zone, but it is more of a heaven. A proxy, basically, is an assisting tool for the warrior of the Internet. We can give it the shape of a Squid or of a Katana, but the tool by itself is there to help. And despite the fact that in the science-fiction and fantasy worlds the image of such a tool might be fixed, the truth is that it can take multiple forms. Also, not every Internet warrior needs the same tools as another. Some need raw Internet while others need a more digested one, based on age and experience.
Compared to the first human, whom God created, we engage the world at a much higher level than raw, basic input and output. And since we are at about the 6,000th year since this world's creation, we have an embedded proxy in each and every one of us. Every pair of parents shares some set of tools with their kids. Yes, this tool of war which helps us to digest raw Internet input and output.
A wise man once told me “Your tongue has lots of power, do not do harm!” and I wondered to myself about this fact for a couple of years.
I knew that we have power in our words, but compared to the raw hardware we are at a much higher level. We all have a proxy embedded inside of us, and this is a fact. Now the question which stands in front of every Internet user and admin is whether he wants to utilize this tool as an assisting glass onto the lower levels of the Internet, and build the next and higher level, or to harm what is already there.
Little by little in life we discover our proxy powers, and we can choose either to take these into our hands and do good, or to use these powers in a way that will shame our form as creations of a much better world. Yes, despite what many inexperienced kids say, we have a very good foundation, but we need to maintain it.
I have been asked a couple of times “Why do you package?” and the simple answer is that I need it. But I must admit that it's not because other developers and packagers don't do a great job.
It's just that I have a couple of specific systems that I operate and I need them to be up to date, and until now I haven't had the option of someone taking enough on his shoulders to make it happen at an Enterprise level.
And of course it's not that they do not want to, but any Enterprise-level distribution gives a good foundation for very complex systems. When they develop and package, they test and test, and do many other things, until they can stand behind the result for themselves and their clients.
It's as simple as this: if in some case you have found an issue, they should help you with it, at the level of the contract you have with them.
For example, since I have a licensed Windows PC and a licensed SLED laptop, I expect them to release at least security updates.
I know that every major Linux distribution does its job in a very good way. I know it since I can see their src.rpm packages and compare my work against theirs.
Basically, if you are a young man or a kid, the scene in Toy Story where Andy opens the present packages at the party and then finds Buzz Lightyear in a big box would describe, to some degree, how handling a package shouldn't happen in Enterprise-level businesses.
(from about 09:00 to 14:00 minutes in the movie)
An admin should rely on the packages and packagers in general, but he needs to plan and understand the nature of an update in order to anticipate what the update could do to the system.
If you have a big system which clients are using, you cannot just run “yum update xyz” because you want to update. And specifically with squid, you need a strategy that will allow you to sort of “bypass” this specific server and use another server in the meanwhile.
On a small scale it's as simple as adding another squid instance on the same machine, or accessing squid through an haproxy LB.
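The haproxy bypass approach can be sketched with a small configuration fragment; this is only an illustration, and the ports, backend and server names are invented, not taken from any real deployment:

```
# haproxy.cfg (illustrative fragment)
frontend proxy_in
    bind *:3128
    default_backend squid_pool

backend squid_pool
    balance roundrobin
    # Disable squid_a (e.g. via the haproxy stats socket) before running
    # "yum update squid" on it; traffic fails over to squid_b meanwhile.
    server squid_a 127.0.0.1:13128 check
    server squid_b 127.0.0.1:23128 check backup
```

With the `backup` keyword, squid_b only receives traffic while squid_a is down or disabled, which gives exactly the “bypass one server while it updates” pattern.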
It doesn't really require magic, just basic skills, knowledge and responsibility; if you are lacking one of these, you should invest some time to acquire it.
If for some reason the generic packaging doesn't fit your business or others', then you should understand that their aim was not exactly you but another audience. And yes… it's hard to see a package that does weird things, like giving suid (chmod u+s XYX) to a file which is actually just a tiny script that should never run as root. But these things happen when the packager is novice enough not to understand the meaning of the work.
RPMs Automation: Here I come!!!
The squid-cache project and many others do not package things in binary format, and it's important to understand that every Open-Source project probably has limited resources and goals; squid-cache is no exception.
All the sysadmins that use my RPMs will probably enjoy them for a very long time, but I have started to move from manual, hard-labor RPM creation to an automated one.
It means that the packages will be pushed to the repository automatically for any squid release. My plan is to “certify” each of the RPMs, or a release, by a set of tests which are mainly manual rather than automated.
I haven't decided yet what to do when a release fixes one bug but introduces another, but I will probably bump the RPMs version after basic testing.
And I would like to dedicate this amazing remix to all the amazing young IT-industry mages who work many nights to make the magical Internet as great as it is now.
The above link was removed, so here is another copy:
I am happy to release the new RPMs of squid 3.5.12 for CentOS 6 64bit, 32bit and CentOS 7 64bit.
The new release includes a couple of bug fixes and improvements.
I have also taken the time to build the latest beta 4.0.3 RPM for CentOS 7.
The details about the RPMs repository are at the squid-wiki.
Why 3DES (triple DES)? or The fall of DES.
It has been known in the cryptography world since 1997 that DES (i.e. single DES) is vulnerable to some attacks, and it is therefore considered unsafe for some uses. In order to resolve the DES issues, 3DES was implemented, due to the ability to reuse the same fast cryptography machinery/chips that were used before, and by that give the industry some time to find another, more fitting solution.
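To put rough numbers on the fall of DES, here is a back-of-the-envelope sketch. The billion-keys-per-second rate is an arbitrary assumption for illustration; dedicated hardware (the EFF DES cracker of 1998) already searched the space in days:

```python
# DES uses a 56-bit key; 3DES chains three DES operations,
# E_k3(D_k2(E_k1(m))), reusing the existing DES machinery/chips.
des_keyspace = 2 ** 56

# Meet-in-the-middle attacks cut 3DES's effective strength to about
# 112 bits, rather than the naive 3 * 56 = 168 bits.
tdes_effective_keyspace = 2 ** 112

# At an assumed billion keys per second, exhausting single DES takes
# only a couple of years even on this naive single-machine budget:
seconds = des_keyspace / 1e9
years = seconds / (3600 * 24 * 365)   # roughly 2.3 years
```

The gap between a keyspace that one machine can walk in a couple of years and one that is 2^56 times larger is the whole point of moving from DES to 3DES.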
Some words about the DES encryption from Professor Gideon Samid:
Hashing compared to Encryption
The difference between hashing and encryption is the ability to recreate the original digested content: encrypted content can be decrypted back into the original, while a hash cannot. Hashes are meant to allow some kind of content validation and verification, based on the low probability of math collisions. To give a simple example on the subject we can use the Quadratic Formula: x = (-b ± √(b² - 4ac)) / 2a
The formula shows that it is possible (and in some cases guaranteed) to have two answers to the same question/issue/variables.
Based on that fact, that the same function and unknowns can have two answers/solutions, we can use a function to describe more than one number. And in the case of computers, where everything is some kind of number, we can convert the unknown numbers into octets.
Once there is no difference between numbers and/or octets and letters, we are in the world of function computation. There we can use all sorts of functions/equations in order to describe all sorts of numbers, and by that, letters.
Eventually, hashes are some kind of known functions which implement some way to reflect very big numbers, or very big documents, in some kind of output. Technically speaking, it's some function or method that is guaranteed to reflect very big numbers, with some probability (high or low) that multiple input values will be reflected by the same output number (128 bits, for example).
At many application levels, hashes such as crc32/md5/sha-1 and others are being used, and these applications allow themselves to validate content integrity with a fully “vulnerable” hash, due to the fact that the validated content does not exceed the sizes at which the function's collisions become likely.
I must admit that I have used MD5 and many other hashes for a very long time, and the only collisions I have seen affect real-world application integrity are those of CRC32 hashes. Maybe I have not seen enough yet!
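The collision idea is easy to demonstrate by shrinking the output space: the sketch below truncates SHA-256 to 16 bits so a birthday-style search finds a collision in a few hundred attempts. Real MD5 or SHA-1 collisions require enormously more work, but the principle is the same; the helper names here are mine, not from any library:

```python
import hashlib

def tiny_hash(data, nbytes=2):
    """Truncate SHA-256 to nbytes, simulating a 'small' hash function."""
    return hashlib.sha256(data).digest()[:nbytes]

def find_collision(nbytes=2):
    """Birthday search: with a 16-bit output, a collision is expected
    after roughly 2**8 = 256 distinct inputs."""
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        h = tiny_hash(msg, nbytes)
        if h in seen:
            return seen[h], msg   # two different inputs, same hash
        seen[h] = msg
        i += 1

a, b = find_collision()
assert a != b and tiny_hash(a) == tiny_hash(b)
```

The same logic explains why a short CRC32 can collide on real-world content while a full 256-bit digest practically never does.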
And a couple of expert words from Professor Gideon Samid on hashing:
Disclaimer: I am not a cryptography expert!
This RPM release was tested for:
ICAP 204/206 compatibility (non-SSL)
eCAP passthru adapter which digests the response body using SHA256
Basic load testing
Basic ssl-bump usage in strict forward proxy mode
Basic verification of no memory leaks over a long period of operation
Basic build tests
All of the above was done on CentOS 7 x86_64 VMs.
I have not tested everything on CentOS 6, since it is assumed that if it works well on CentOS 7 there should be no special reason for it not to work on CentOS 6.
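As an illustration of what the SHA256 body digest in the eCAP test above amounts to (this is not the adapter's actual code, only a sketch of the idea), here is a streaming digest over response-body chunks:

```python
import hashlib

def digest_body(chunks):
    """Incrementally SHA-256 a response body that arrives in chunks,
    the way a streaming adapter would see it."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# A body delivered in three chunks digests the same as the whole body:
body_chunks = [b"<html>", b"hello", b"</html>"]
digest = digest_body(body_chunks)
assert digest == hashlib.sha256(b"<html>hello</html>").hexdigest()
```

The incremental `update()` calls matter because a proxy adapter never holds the whole body in memory at once.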