Paranoia Vs. Transparency And Their Effects On Internet Security

Reactions to non-intrusive probes and to network activity that is merely unexpected are becoming increasingly hostile, a result of the growing number of incidents and security threats. From a security perspective, overreactions to activities that cross no authorization or legal boundaries are leading to a scenario where anyone acquiring basic information about a system has to fear potential consequences. Seen on a wide scope, this means network security is no longer transparent.

Why is a non-transparent security situation on the Internet bad? Obviously, it is a big advantage to malicious intruders, who have no legal concerns because they can conceal their identity through compromised systems, and a big disadvantage to security firms, admins and individuals who depend on a complete picture of Internet security problems in order to solve them. Non-malicious, beneficial large-scale scans, like the broadcast amplifier scanning projects, are becoming harder and riskier to perform using legal resources.

Network scanning and the corresponding tools evolved out of a necessity to counter new intrusion methods after they were commonly employed by system crackers. [1] A scanner is simply security software that automates the process of making connections to a service to determine its availability and version, which allows drawing conclusions about its security and potential vulnerabilities. Scanning a host is the fastest way to identify its remote vulnerabilities, since it puts the analyst in the same position as an attacker, seeing all possible holes.
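The core of what a scanner automates can be sketched in a few lines: open a connection to a port and read whatever banner the service announces itself with, from which availability and version can often be inferred. This is a minimal illustration, not a real scanner; the host and port in the usage comment are placeholders, and services that expect the client to speak first (such as HTTP) will simply time out.

```python
# Minimal banner-grabbing sketch: connect to a TCP port and read the
# greeting the service sends, if any. Only probe hosts you are
# authorized to examine.
import socket
from typing import Optional

def grab_banner(host: str, port: int, timeout: float = 3.0) -> Optional[str]:
    """Connect to host:port and return the first chunk the service
    sends (its banner), or None if the port is closed, filtered,
    or the service sends nothing before the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        # covers refused connections, timeouts, and unreachable hosts
        return None

# Usage (placeholder host): grab_banner("mail.example.com", 25)
# might return something like "220 mail.example.com ESMTP Sendmail ..."
```

A real scanner adds concurrency, service fingerprinting beyond the banner, and protocol-specific probes, but the principle, connect and observe what the service reveals, is the same.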

The cause of today's widespread security problems is that people ignore security measures that are merely common sense. Many sites exist with gaping holes because their admins just don't know any better; they see no threat to their small, unimportant site. This is wrong, since the Internet is a network with literally millions of potential intruders, and the majority of intruders, whether kiddies or criminals, select random targets to compromise and use as resources, which means that every site is equally at risk.

Another big problem is that many admins lack the time to investigate all potential security issues, let alone all new vulnerabilities and advisories. As it is currently part of my work to read and evaluate all information from the most important security lists and sites, I can say it is a task that takes at least one hour each and every day, and another hour if you really want to understand everything you read. This adds to the negative effects of information about the security of a broad range of Internet hosts not being openly available. Since it is so difficult to obtain statistical information on widespread security issues, there is little awareness of the security issues that really matter, and it is much harder for the average admin to determine which of the mass of known vulnerabilities and problems to check and protect against with priority.

I believe the problem of networks with gaping security holes has grown larger than most people, including most security professionals, expect. A recent study by a research group concluded that 50% of all smaller enterprises will have to deal with intrusions by 2003. The problem with raising awareness is that security news, incidents, and publications of security tools and advisories only generate more awareness among people who already have a basic knowledge of security. But a lot of people responsible for Internet sites still don't have enough awareness to take even the fundamental steps to protect against intrusions, and they will never seek out security services themselves. Battling incidents and insecurity on the Internet is a question of reaching and contacting as many people of this kind as possible. In this context, large-scale auditing and gathering of vulnerability information could be a viable tool for identifying and notifying these people; you could even see it as a process of mass security education.

Transparency, in this context, means the possibility of freely accessing hosts and networks in non-harmful, non-intrusive ways for the purpose of security reconnaissance, without being seen and treated as a malicious attacker. The importance of network transparency is comparable to the rationale for publishing advisories and exploits in the name of full disclosure: the process demonstrates exactly how security issues are a problem, and how they can lead to incidents.

Arguably, the recent popularity of Intrusion Detection Systems is not a bad trend. IDS capabilities can be valuable for detecting and blocking intrusions when they are employed by someone with sufficient background knowledge to distinguish between serious signs of incidents and harmless reconnaissance or false positives. But intrusion detection is not the only thing that can be relied on; it is just part of the reactive protection measures, while assessment and scanning constitute the necessary proactive measures.

And performing proactive security measures beyond your own network is justified, considering that on a public network, our own security is always threatened by the security problems of others. Without machines all over the world being compromised, attackers would hardly be able to strike anonymously and cover their tracks in any meaningful way. Spoofed-packet attacks, DDoS agents and trojans used for relaying connections, as well as the compromise of related hosts via password sniffing, would pose a far less serious threat. Eliminating this threat can only be in everyone's interest, primarily that of the admins unaware of security, who have their sites compromised and unknowingly used in attacks against third parties. [3]

Of course, tolerating any client activity on a host is always a matter of trust, a concept I don't even want to start discussing. But the fact is, in the case of malicious intruders and "aggressive" scans, nobody has the choice of accepting them or not, since they usually come from another compromised machine; and even when they don't, there are hundreds of other potential attackers waiting out there for every one you manage to track down. With links to the Internet you are part of a globally accessible network, which means the best thing to do is to turn off the services you don't want accessed, or to set up access controls and firewalls, which is encouraged but rarely done consistently in practice.

One situation where I see a direct justification for scanning is, for example, a financial transaction over an e-commerce site. As a consumer, checking out the general security of a site before submitting billing information gives me more assurance than any certification can. I even see this as an advantage for the company offering the service: if they have poor security, people will stay away from them, or possibly notify them, reducing their costs by preventing incidents (and the accompanying lawsuits from customers who have fallen victim to an attack); if they have good security, people will know it and prefer their services.

Another example is the spam problem. When receiving unsolicited mass mail in annoying proportions, I think it is justified to examine the third-party SMTP server from which the mail was relayed to hundreds of addresses without authorization. Often you can identify a lot of problems with such systems; they are mostly excellent examples of sites totally unaware of security. In that case, it's time to explain to the admin a bit about network security and third-party responsibilities. I think if more people did such things, or were even encouraged to do so, cybercrime laws and government regulation of IT businesses' security would eventually become superfluous.
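The classic symptom of such a misconfigured mail server is an open relay: it accepts mail from anyone, addressed to anyone, without authentication. A hedged sketch of how one might check for this is below; the function name, addresses, and helper are all illustrative placeholders, no message is ever actually sent, and such a probe should only be run against a server you are authorized to examine (or, as argued above, one that has already relayed spam to you).

```python
# Open-relay check sketch: issue MAIL FROM / RCPT TO for a foreign
# destination and see whether the server accepts it, then reset the
# transaction so no mail is sent. All names here are placeholders.
import smtplib

def looks_like_open_relay(host: str, port: int,
                          probe_from: str, probe_to: str) -> bool:
    """Return True if the server accepts RCPT TO for an arbitrary
    external address without authentication (a classic open-relay
    symptom). Returns False on refusal or any connection error."""
    try:
        with smtplib.SMTP(host, port, timeout=10) as smtp:
            smtp.helo("relay-check.invalid")
            code, _ = smtp.mail(probe_from)
            if code != 250:
                return False
            code, _ = smtp.rcpt(probe_to)
            smtp.rset()  # abort the transaction; never send a message
            return code in (250, 251)
    except (smtplib.SMTPException, OSError):
        return False
```

A relay that answers 250 to an RCPT for a domain it is not responsible for is exactly the kind of system spammers abuse, and pointing this out to its admin is usually the most constructive response.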

The criminalization of scanning, and of general access to network services that some people simply don't want accessed (already, the current laws can label almost any network activity as an intrusion, because they can be interpreted arbitrarily), will ultimately lead to a situation where companies and individuals performing scans and network surveys for security-relevant data face big problems, while system crackers using illegally acquired resources can still effectively probe and attack any site.

The situation for full-disclosure security measures is on its way to getting worse, perhaps a lot worse, as governments try to introduce legislation like the international convention on cybercrime, which would criminalize everything from sniffing and using crypto on your own network to the possession and development of security tools, let alone remote network activities. Without calling this trend an evil government conspiracy, you can safely say that the people working to advance such legislation are not acting in the best interest of security and e-commerce. This is not solely out of stupidity or lack of knowledge; there are lots of people who gain advantages from criminalizing benevolent security practice: think of new government jobs, legal powers over the security industry, and the possibilities for domestic surveillance.

If the government and the security community decide that consumers and users on the Internet, who are directly affected by the security of their peers, should not have the right to scan, then their only recourse will be legal.

[1] An example of this trend is the popular paper "Improving the Security of Your Site by Breaking Into It", along with the development of the first widely used security scanner, SATAN.

[3] Legal liability for compromised systems that unknowingly participate in incidents, such as DDoS attacks, may be enforced more strictly soon.



