March 2010 Archives

Planning security: breaches in the wall

The weakest link

People are the weakest link in any security concept.

This is why security should be treated as an ongoing discipline, not just a set of actions performed once.

A common mistake is to make the outer bounds of security as solid as possible while leaving internal areas virtually unprotected. In other words, the intranet might be shielded against every possible intrusion from the outside Internet, yet remain very vulnerable from within.

If there are Windows workstations, they should all be shielded as well. Antivirus software, a personal firewall, anti-malware tools — these are mandatory bricks in the security wall. All security updates must be applied as soon as possible; unless there's a real need, software should only be installed by the system administrator.

In short, the optimal approach is this: plan the security concept as if every computer in your intranet were accessible from the Internet by anyone, and as if there were no guarantee its users are all careful and literate enough to avoid every possible danger.

A Trojan horse «caught» by a user on their intranet computer might nullify whatever security measures have been taken so far. And this is just one example of how vulnerable a network becomes if even a single computer lacks appropriate security measures.

An ideal security concept

The most fundamental principles I build my security concepts upon are listed below, just to give an impression.

Nothing is safe enough. Not a single computer, network device, or anything else participating in the intranet. No device may be considered safe. Apply the same security hardening steps to every piece of your intranet.

No default values. Nothing that is used in production may be left as programmed by default. No factory-generated passwords, no default names (identifiers), if they can be changed.

All passwords must be young. This is what all users hate: inventing new passwords regularly. Teaching people simple ways of creating strong passwords isn't an easy task, yet any weak password may jeopardize the entire security.

Make backups often. Back up everything: every piece of data related to setting up any service, piece of hardware and so on. Back up often, and keep copies in several places. Remember that people belong to one of two classes: those who haven't lost any data of importance yet, and those who make backups on a regular basis. Log files should also be remembered, since they can serve as proof of whatever happened, if you need to investigate anything later.
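
To illustrate, here is a minimal Python sketch of such a routine (assuming a Un*x-like host; the paths are placeholders for your own setup): it archives a few directories and stores dated copies in several places.

    # Minimal backup sketch: archive each source directory and keep
    # dated copies in several destinations. Adjust paths to your setup;
    # archiving system directories such as /etc may require root.
    import shutil
    import time
    from pathlib import Path

    SOURCES = [Path("/etc"), Path("/var/www")]
    DESTINATIONS = [Path("/backup/local"), Path("/mnt/offsite")]

    stamp = time.strftime("%Y%m%d-%H%M%S")
    for src in SOURCES:
        for dest in DESTINATIONS:
            dest.mkdir(parents=True, exist_ok=True)
            # creates e.g. /backup/local/etc-20100301-120000.tar.gz
            shutil.make_archive(str(dest / f"{src.name}-{stamp}"),
                                "gztar", root_dir=str(src))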

Any change is suspicious. Literally any. A password change performed from an unusual place. A change in the IP addresses the user(s) used to log in from. A change in the pattern of Internet resources visited. And so on. Acquire the habit of studying every change that looks even a bit unusual.

No precaution is superfluous. Again, literally: no precautionary action may be considered superfluous, extra, unnecessary or weak.

What important parts of the concept might I have omitted?

Windows security: no right, no fright

Defaults aren't best

According to a report by BeyondTrust, 90 percent of Windows vulnerabilities can be mitigated by configuring users to operate without administrator rights. In other words, the less powerful the user, the less unpleasant the consequences.

System administrators do not grant privileges easily: the more privileges an account has, the more powerful it is, and the more destruction it can cause. Many actions, relatively safe when no privileges are available, become very dangerous otherwise. However, personal PC users are all tempted to grant their main accounts all the rights. Yes, it is simpler: no need to log out and log back in as an administrator whenever it is necessary to install or upgrade software, and so on.

In total, 64 percent of all Microsoft vulnerabilities reported last year are mitigated by removing administrator rights. The default state of an operating system, where the user has too much access, has become very dangerous.

No power or superpower

The security model of Un*x means you either have all the rights possible (if you use the root, or superuser, account) or, roughly speaking, no rights at all (if you use a regular user account).

The very idea of using a privileged account all the time must be discarded and strongly advised against.

It is well known that there are two types of people: those who have never lost important data, and those making regular backups.

Of course it's a joke, yet it bears an important truth: the less time you spend restoring lost or corrupted data, the more time you have for more important tasks.

So, the conclusion is:

  • avoid assigning users any privileges unless it's absolutely inevitable
  • use privileged accounts to perform local administrative tasks only
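
To illustrate the point, here is a small Python sketch (Windows-oriented, via a real Win32 call exposed through ctypes) that lets a routine script refuse to run under an administrator account. Treat it as a hedged example, not a complete safeguard.

    # Refuse to run routine tasks with elevated privileges (Windows).
    import ctypes
    import sys

    def is_admin() -> bool:
        try:
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:  # ctypes.windll only exists on Windows
            return False

    if is_admin():
        sys.exit("Refusing to run routine tasks with administrator rights.")
    print("Running as a regular user - good.")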

Have you experienced any problems working under a privileged user account?

Network monitoring basics: teach your sites to express themselves

Sites are important

The more you are involved in a Web presence, the more important the sites you maintain become.

The exact type and function of the site(s) isn't important. They all must be available all the time, and the more complex they are, the harder it is to keep track of their state.

Even if you only keep a blog running the ubiquitous WordPress (or any other engine), there are many things to watch.

All pages must be available, and they should load quickly.

There should be no unmoderated comments: when visitors spot obvious spam comments, they may leave your blog for good.

And so on and so forth. What is the easiest way to make sure the site is available?

Monitoring tools

There are dozens of monitoring tools. You might have heard about Nagios, or other common tools. The only problem is that these tools require deep knowledge of operating system internals, while tools developed for a broader audience should be both powerful and multifunctional. This is why I suggest you try IPHost Network Monitor.

OK, now you have a tool at hand. What should you check?

Special pages for special cases

The obvious solution is to load a page — say, the start page — to make sure it works.

However, it is not the best possible solution.

Monitoring assumes the page is checked on a regular basis, and important resources should be checked often. If a site is complex and updated many times a day, every such check wastes server resources.

If you run several instances of a monitoring tool, to ensure connectivity is checked as well, the load grows accordingly.

One solution is to load a blank, or at least a static, page. However, that tells your monitor nothing about all the other subsystems. On a properly set up site there are many services that should be available: the database server, the email system, Web services and all their components, and perhaps many other services as well. Loading a blank page won't tell you whether your site feels well.

The best solution in such a case is to make a special page displaying short but essential information about all the subsystems. It can be generated by a standalone process that checks the health of all the site's components and displays an easily understandable string. In that case, all you need to do is find that string in the loaded page. And the page itself might be very small.
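
For illustration, a minimal monitor-side sketch in Python: fetch the special status page and look for the agreed-upon «all is well» string. The URL and the marker string are hypothetical; use whatever your status generator emits.

    # Fetch the status page and check for the health marker string.
    import urllib.request

    STATUS_URL = "http://www.example.com/status.html"   # placeholder
    MARKER = "ALL SUBSYSTEMS OK"                        # placeholder

    with urllib.request.urlopen(STATUS_URL, timeout=10) as response:
        page = response.read().decode("utf-8", errors="replace")

    if MARKER in page:
        print("Site reports healthy.")
    else:
        print("Marker string missing - raise an alert.")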

Do you have your own solutions for this task? How would you make sure the site works fine?

Intentional insecurity, or They deliberately open links in spam

The «human factor» of security

Human beings are the weakest link in any security system.

You can explain why spam messages should never be opened when their very look makes it evident they are spam and nothing else.

You can provide details on how spammers verify the reliability of their victims' email boxes (by embedding specially crafted links to external images, for example).

You can prove that spam only exists because people open spam messages and perform whatever actions are expected of them (click on fraudulent links, buy the promoted goods and so on).

You can say whatever else there is to say about spam, its dangers and the consequences of not destroying it on sight — and still fail to prevent people from opening spam messages deliberately. That's the strangest thing about human nature. Curiosity, carelessness, whatever else: it makes sending spam messages a profitable business.

Ars Technica posted an interesting article about users clicking links in spam messages. It looks like relatively young users, considering themselves experts in network security, are the ones encouraging further spam sending.

The ABC of email security

The main principles of email-related security are well known, yet they are at times neglected. Let me cite some of them.

All the fields in an email message can be forged. That means you can't rely upon any field; you can't be sure the message is genuine without additional means of verification.

Some of those means, such as SPF checks and real-time block lists (such as those mentioned at Spamhaus), can filter off most spam messages, recognizing them on entrance.
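
To illustrate one such check, here is a small Python sketch querying the Spamhaus ZEN block list over DNS, using only the standard library: a DNS answer means the sending IP is listed, NXDOMAIN means it is not.

    # DNSBL lookup: reverse the IP's octets and query the list's zone.
    import socket

    def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            socket.gethostbyname(f"{reversed_ip}.{zone}")
            return True       # an A record came back: the IP is listed
        except socket.gaierror:
            return False      # NXDOMAIN: the IP is not listed

    print(is_listed("127.0.0.2"))  # standard DNSBL test address, always listed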

One should never click on links in email messages unless absolutely sure the message is 100% genuine and secure. In fact, it's safer never to open any links in email messages, even legitimate ones. Even if a link doesn't exploit a possible vulnerability in your OS or software (such as the browser), it can help the sender track your email address and confirm it does accept email messages. Expect the spam flow to grow after that.

In case you are urged to perform any action, first check that the message came from whoever claims to be its sender. If the message claims to come from your bank, call your bank's office and make sure the email is legitimate and true. And so on.

Spam messages never address you by name. Why? Because they don't have it; because they don't care. Remember: the sound of one's own name is one of the best sounds there is. Every legitimate email sender, if addressing you personally, will address you by name. It doesn't mean that every message where you are addressed by name is legitimate, but if you are not addressed by name in a seemingly personal message, be wary about its contents.

Teaching network security

Nowadays network security is taught the hard way: people suffer the consequences of their lack of security knowledge, and only after that do the victims start to take security seriously. At times, at least.

A single rule could defeat spam and whatever dangers it spreads: never trust anything you see in a spam message; better yet, never open them at all.

Have you ever opened a spam message? If so, why did you do it?

Social networks: beware of imitations

Social engineering in action

Social engineering was, is and will remain one of the most powerful tools for making people disclose whatever is required of them. A very typical example is mentioned in a TechCentral.ie article telling how a security expert broke into a TV star's Facebook account. The scheme itself is so simple and efficient that it is unlikely ever to stop working.

For those unwilling to read the mentioned article: the idea is to imitate the account of someone close to the person you wish to trick. Whereas a human being nowadays might be wary of trusting a stranger, it feels natural to trust someone you know. And if that person's personal data, including photos, bio details etc., are freely available on the Net, it is possible to trick anyone into believing they are dealing with a person they know very well.

The age of trust references

The main problem of the modern Net is the massive loss of trust. People have to prove their identities; we are taught not to trust whomever we encounter in cyberspace. Personal photos, scans of documents and so on can at times be found easily, and that makes all the identity-proving rituals more and more unpleasant.

The problem is not determining the identity of a person contacting you for the first time. It is common practice to place phone calls, ask for utility bills and so on, to make sure you are dealing with exactly the person you think you are dealing with.

The problem is that this sequence of identity-proving actions may become a must every time you contact anyone on the Net. What are the means of proving one's identity?

Physical presence is almost always impossible.

Papers and other physical documents. The only problem is that they must somehow be tested for validity, and it is impossible to use notary services every time you need to confirm who you really are.

Digital signatures (such as GPG/PGP key pairs) are good, provided the owner doesn't lose the passphrase and makes sure to revoke the key quickly if it leaks to a third party.
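
As an illustration, a minimal Python sketch verifying a detached GPG signature by shelling out to the gpg command-line tool. It assumes gpg is installed and the sender's public key is already imported; the file names are hypothetical.

    # Verify a detached signature; gpg exits with 0 on a good signature.
    import subprocess

    def signature_is_valid(signature_file: str, data_file: str) -> bool:
        result = subprocess.run(["gpg", "--verify", signature_file, data_file],
                                capture_output=True)
        return result.returncode == 0

    print(signature_is_valid("message.txt.sig", "message.txt"))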

Biometric data may seem very reliable, but it's very inconvenient and expensive to check wherever a person's identity is in question.

The loss of anonymity is an important issue as well. We are being told that anonymity is evil, yet reality demonstrates the problem is not anonymity, but the incompatibility of most human society norms and laws with cyberspace.

Currently a number of means are used to check that people actually are who they claim to be — phone calls and so on. However, if someone impersonates your close friend and you trust that person (trusting their Internet presence), trust in the real person will be affected as well.

The primary damage of cybercrime is that we are being persuaded not to trust whatever we see on our screens.

Conclusion

There are always means of contact a person seldom or never shares with the whole world, only with the most trusted people.

Also, if real trust is to be created, you had better meet the other human being personally. At least once.

In all other cases I can only suggest remaining wary if you are contacted by a person who claims to be a friend or relative of yours, until you contact him or her by other means (the personal ones mentioned above) and make sure the cyber-image belongs to a known physical person.

And the rule of thumb remains the same: do not spread your personal information, unless you know what you are doing.

After all, the loss of trust could be much graver than any other consequence.

Blogging on: watch your blog's heartbeat

Blogs: uptime is important

Blogs aren't just «personal diaries» any more. They are used to generate income; thus blogging has become a business. Just like any kind of business, blogging may suffer a lot if blogs become slow to respond, cease to be available or exhibit a number of other undesirable traits.

It's hard to watch even a single blog when it comes to keeping it clean. A number of parameters should stay within acceptable bounds: page loading speed, document presence (no '404' errors) and so on. A blogger should be able to notice and moderate incoming comments and trackbacks, make sure announcements of new posts are spread through the statusphere in time, and so on and so forth.

Most of a blog's status data may be obtained by calling a specially crafted URL (using the GET method) or by posting data to a URL (the POST method). In other words, retrieving such data and analyzing it may serve as a good means of blog health monitoring.
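
As an illustration, here is a hedged Python sketch of the GET-based check just described: time the request, verify the HTTP status and flag a slow or failing blog. The URL and threshold are placeholders.

    # Time a GET request and flag failures or slow responses.
    import time
    import urllib.request

    URL = "http://blog.example.com/"    # placeholder
    MAX_SECONDS = 5.0

    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=30) as response:
        body = response.read()
        status = response.status
    elapsed = time.monotonic() - start

    if status != 200 or elapsed > MAX_SECONDS:
        print(f"Problem: status {status}, loaded in {elapsed:.1f}s")
    else:
        print(f"OK: {len(body)} bytes in {elapsed:.1f}s")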

If a blog experiences a problem, it must be identified and handled as soon as possible. A blog's downtime may cost very much; thus one should monitor all the data reflecting the blog's status.

Monitoring: looking for an optimal tool

Network monitoring isn't a new idea; there are hundreds of tools, services and so on. The question is what to choose and why.

First, you could think of monitoring services such as mon.itor.us.

Pros: they often perform monitoring from a number of locations; they are often inexpensive or free; they are easy to set up.

Cons: these services often lack a number of useful features, such as a variety of network protocols to use or convenient means of alerting (such as SMS and/or Jabber messages). Also, if you need to monitor restricted-access data, you may have to grant the service access to that info. That may be inappropriate.

Second, you could install server-side monitoring software, such as Nagios.

Pros: powerful and efficient; the functionality of such software can often be extended using plug-ins and so on.

Cons: you need a certain amount of knowledge of operating systems and related areas to install, run and troubleshoot such software. Also, such software often lacks a convenient user interface.

Third, you can use desktop monitoring software such as IPHost Network Monitor.

Pros: desktop monitoring software is often easy to use and easy to understand. It can be extended using well-known monitoring products such as Nagios. Also, you can access restricted areas of your site by providing the correct credentials in the configuration of the monitored resources.

Cons: the software must run as long as possible to gather a good amount of service statistics. Also, it can cause high CPU load and consume a significant amount of RAM and disk space.

Conclusion

No service or product is ideal, in fact. The optimal setup could make use of two or more of the mentioned choices.

For example, if a network of servers/services could supply their data to the desktop software, it would expand the capabilities of both.

Until then, a desktop implementation of network monitoring software seems the best option for the absolute majority of bloggers wishing to be the first to know when their blogs experience any problem.

Can you spell 'password', or The weakest link in security

Strong password, weak password

Are your passwords strong?

An average Internet user registers at dozens of services, and almost everywhere one has to invent a password. There are few alternatives to good old password verification; some of them are useful, some not so much.

«Convenient, reliable, cheap: choose any two». To keep your data secure and access it quickly, it is necessary to use an authentication method that is both simple and reliable. So what are the typical approaches to creating a good, strong, yet easy-to-remember password?

I say «easy to remember», but passwords, at times, need not be remembered literally. The method I use to invent hard-to-guess yet easy-to-remember passwords is simple: several words are used, interspersed with commas, dots etc., and a group of digits is placed somewhere inside. The words may be taken from any source: the idea is to keep that source mostly unknown to everyone else.
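
Here is a small Python sketch of that method, using the standard secrets module. The word list and separators are stand-ins; the point is to draw words from a source mostly unknown to everyone else.

    # Words joined by punctuation, with a group of digits placed inside.
    import secrets

    WORDS = ["lantern", "orbit", "velvet", "copper", "meadow"]  # your own source
    SEPARATORS = [",", ".", ";", "-", "_"]

    def make_password(word_count: int = 3) -> str:
        parts = [secrets.choice(WORDS) for _ in range(word_count)]
        digits = str(secrets.randbelow(10000)).zfill(4)
        parts.insert(secrets.randbelow(len(parts) + 1), digits)
        return secrets.choice(SEPARATORS).join(parts)

    print(make_password())  # e.g. "orbit-0412-velvet-copper"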

I won't post long and solid recommendations on how to invent a password. There are many on the Net already; and if you're out of ideas, there are many well-known pieces of software solving that task — starting with pwgen.

The question is how to manage many passwords without using the same password for several types of services (which is very insecure: if one of those services leaks the password somehow, a number of your other resources become very vulnerable).

A true crypt for passwords

Well, yes: I write passwords down to a file and keep the file at hand. Insecure? Not quite.

First, I use the TrueCrypt freeware to generate encrypted volumes: a good means of storing one's passwords and any other sensitive information. Yes, mounting a TrueCrypt volume also requires a password; I use keyfiles as well, so if I lose the flash drive where a TrueCrypt volume is stored, it won't be deciphered in the absence of a keyfile.

The passwords are recorded in a file, but not in their plain form. I use mnemonic rules and tokens (storing only those parts of the passwords that can be reconstructed). A good approach, as long as I remember the password for the encrypted volume.

Open your ID

OpenID can be a good alternative to storing many passwords: if a service supports OpenID authentication, you only have to remember the password for your OpenID provider account. There are many providers: Google's user profile URL may serve as an OpenID; so may a LiveJournal account, Chi.mp and many other services, including openid.net.

There are only two problems with the OpenID solution to the password problem.

First, you need a reliable OpenID provider. If it's gone or changes owners, you can as well bid farewell to the resources where you've used that OpenID identity.

Second, if you lose control over your OpenID account, you can lose all the resources where you authenticated with it. It's like placing all the keys on a single keyring: it may be a good way to have all the keys at hand, and to lose them all as well.

Myself, I prefer to use either my own OpenID server or a well-known provider, one that will most probably stay with us for many years.

Other doors, other keys

OpenID isn't the only alternative to the name-and-password combination. There are other protocols and techniques (say, OAuth), but with all of them you should remember at least one password. And it had better be long and really hard to guess.

It should be understood that you can't afford to neglect security the moment you have any sensitive data on your computer or on a Net resource.

A weak password means a number of troubles, sooner or later (most probably sooner).

Security isn't a one-time action. It's a mental discipline, a way of thinking. Create a reliable means of storing passwords and restoring them; make sure they are all different; change them often — these mantras will remain important for good. Yet not everyone on our planet abides by them.

What is your way of keeping your data protected?

A door with no keyhole: port knocking at work

Standard vs. non-standard ports

The first problem system administrators face when setting up any server accessible from the Internet is handling all the 'hacker' attacks, those aimed at gaining access via SSH and other means.

My experience shows that even if a domain name was never exposed on the Net, it comes under constant «knocking» almost immediately.

First, it is annoying.

Second, if there are users with weak passwords, it's a constant security risk. No matter how strictly the users are held responsible, system administrators will have to clean up the mess.

The simplest approach, banning (firewalling) an intruder by IP after a certain number of connection attempts, isn't an ideal solution: IP addresses may be spoofed, and a malevolent user may effectively block legitimate access from a given IP (or IPs).

One of the simplest solutions is to use non-standard port numbers for standard services. They should be chosen wisely, so as not to interfere with other well-known services using the same port number. Also, when and if the «secret» port leaks, we return to the original problem and should either choose another port number or use another approach altogether.

Port scanning can also nullify the efficiency of non-standard ports, if connection isn't restricted to known IP addresses only («strict whitelisting»).

Port knocking

There's a solution named port knocking. In short: the process or person wishing to connect first tries to connect to a known sequence of ports («knocks»); if the knocking is correct, their IP is granted the right to connect to, say, the SSH port.

The method is both simple and complex, and has its advantages and disadvantages.

To implement port knocking in the case of, say, iptables (netfilter), a special module can be used. Otherwise, a special daemon process may monitor firewall logs and, upon detecting knocking, temporarily add a rule allowing connections from the knocking IP. The rule should be removed when the connection session ends.
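
For illustration, a much simplified Python sketch of such a log-watching process. It assumes the firewall logs dropped packets with SRC= and DPT= fields to /var/log/kern.log (an assumption about your setup); real implementations (knockd, for example) are far more robust and tail the log continuously.

    # Track per-IP knock sequences in the firewall log; on a full match,
    # open the SSH port for that IP (remember to remove the rule later).
    import re
    import subprocess

    KNOCK_SEQUENCE = [7000, 8000, 9000]   # the secret sequence
    LOG_LINE = re.compile(r"SRC=(?P<ip>\S+).*DPT=(?P<port>\d+)")
    progress = {}  # ip -> how many knocks matched so far

    def allow_ssh(ip: str) -> None:
        subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-p", "tcp",
                        "--dport", "22", "-j", "ACCEPT"], check=True)

    with open("/var/log/kern.log") as log:   # a real daemon would tail this
        for line in log:
            match = LOG_LINE.search(line)
            if not match:
                continue
            ip, port = match.group("ip"), int(match.group("port"))
            step = progress.get(ip, 0)
            if port == KNOCK_SEQUENCE[step]:
                progress[ip] = step + 1
                if progress[ip] == len(KNOCK_SEQUENCE):
                    allow_ssh(ip)
                    progress[ip] = 0
            else:
                progress[ip] = 0  # wrong knock: start over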

The major disadvantage is this: if the watching daemon dies and/or starts to malfunction, you are effectively locked out of your server. Also, the knocking attempts may be recorded in the logs in a different order (although this is not a major problem if the exact sequence isn't important).

My further experiments convinced me that a «hybrid» security approach can be used in most cases.

First, I add a permanent rule to allow connections from certain IPs (namely, the IPs of the computers I mostly work at). That saves me from the worst scenario, when the port-knocking daemon process dies for some reason and can't be automatically restarted.

Second, I set up a daemon process monitoring port knocking access, able to restart/reload the control process if necessary.

Third, I use non-trivial, unassigned port numbers.

Fourth, I use encryption to prevent spoofing of my administrative IPs (to ensure it is really me who initiated the connection).

So far, the method allows me to ward off most «knockers» looking for a weak password.

Perhaps you could suggest an even more secure, yet still reliable and not-too-complex scheme?

Reliable email service: keep everything in order

Email is still important

Email was one of the first communication means invented. Since the very first days of the Internet, email was and remains one of the most convenient tools of communication.

It remains one of the most important and quickest means of communication, regardless of all the latest plagues like spam. Hence the task is to make this service as reliable and efficient as possible.

This means all the underlying services must be reliable and efficient.

What should we watch

The term «email», in fact, means more than the ability to receive and send messages.

To receive email, we would typically use the POP3 or IMAP4 protocols. Webmail can also be used to work with email (webmail is often an additional service communicating with the underlying SMTP and POP3/IMAP services).

To send mail, we should make use of SMTP.

The mail is often scanned for possible threats (viruses, malware etc.). Thus, a virus scanner is required.

The known ports should be open and available for connections.

To receive email, one usually has to authenticate. Thus, the authentication service(s) must also be available.

If you keep listing what must function properly, the list of these resources might well reach several dozen entries. Server load, disk space, ability to connect to the outer world... and so on and so forth. So the phrase «make the email service work reliably» assumes quite a lot of monitoring of all the underlying services and resources.
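
As an illustration, here is a minimal Python sketch checking two of the services listed above, SMTP and POP3, with short timeouts. The host name is a placeholder for your mail server.

    # Probe SMTP and POP3 availability using only the standard library.
    import poplib
    import smtplib

    def check_smtp(host: str) -> bool:
        try:
            with smtplib.SMTP(host, 25, timeout=10) as smtp:
                return smtp.noop()[0] == 250   # 250 means "OK"
        except OSError:
            return False

    def check_pop3(host: str) -> bool:
        try:
            pop = poplib.POP3(host, 110, timeout=10)
            pop.quit()
            return True
        except (OSError, poplib.error_proto):
            return False

    print("SMTP:", check_smtp("mail.example.com"))
    print("POP3:", check_pop3("mail.example.com"))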

The solution

The solution is simple: all the services, resources etc. involved must be monitored, and any possible problem with any one of them must be addressed as soon as possible. The question is how to perform the monitoring and what tools to use.

Two days ago we announced the availability of a major update of the IPHost Network Monitor software.

The way we ensure our corporate email service works flawlessly is to monitor all the services and resources mentioned above. SMTP, POP3/IMAP4 services, connectivity, server load, disk space, the mail queue and all the underlying services must be available and respond quickly enough. The task is to set up the delays and response times that should be considered appropriate.

An important note: if you wish to monitor and troubleshoot an email service, it is advisable to use an alert type other than emailing the administrator. Emailing the administrator about email problems might result in no alerts being received.

Speaking for myself, I prefer Jabber alerts (for less critical events) and SMS (for the most critical ones).

IPHost Network Monitor: a major update, March 10th, 2010

What is IPHost Network Monitor?

We are proud to announce a new release of our Windows product, IPHost Network Monitor.

In short: this is a piece of Windows software developed to make it easy to monitor a number of network resources from a single command center, and to alert you in a variety of ways about any events (monitoring alerts) requiring your attention.

To learn more, you could visit the site of IPHost Network Monitor; meanwhile, let me explain the importance of monitoring in general.

Nowadays, maintaining even small networks and standalone sites becomes quite a problem. If a site is popular, every malfunction of its components may result in a massive loss of visitors; if the site is commercial, in a loss of customers.

There are many services and products capable of performing network monitoring tasks. However, it's more convenient to have a single, integrated command center to monitor your resources from.

This is what IPHost Network Monitor has been created for.

Latest changes

There are many smaller efficiency, stability and bug-fixing changes in the latest build. Most important, a new alert type is available: SMS over a GSM modem or a cell phone attached to the computer, for a more reliable notification delivery method.

Subscribe to our news to be aware of all the major updates of the product. We value your feedback very much!

Virtual Private Networks: stay invisible on the Net

Secure surfing

A concept of individual security on the Net includes secure surfing. Using proxies is not enough: your ISP still has the power to monitor all your activity and record your traffic, especially if you use your provider's DNS servers.

A VPN, or virtual private network, is a network built upon an existing network; all traffic through a VPN is «invisible»: encrypted, made indecipherable. VPN services are used by many corporations, since these networks provide a high level of security when used properly.

There are many VPN products and services, both open source and commercial. The well-known open source OpenVPN is one of the most popular solutions if you care about secure access to the resources of your private network (intranet), or to any standalone server's resources.

However, you can use VPN services free of charge, to make Internet surfing more secure.

Beyond hiding one's traffic from prying eyes, a VPN can serve other needs. For example, it can allow accessing otherwise restricted or blocked sites (yes, the Internet has places blocked for certain people).

Free VPN services

There are several VPN services offering free usage; ItsHidden and UltraVPN come to mind. Both offer clients to access their networks; however, one can use other VPN clients to connect, such as Shrew Soft.

Although ItsHidden, for example, does not provide a Linux client, any existing PPTP client, such as pptp, may be used.

After you create an account at a VPN service, install the client and connect, all your traffic starts to travel through a safer path. At least it can't be directly watched by the ISP or any third parties en route.
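
One hedged way to confirm the tunnel is actually in use: compare your public IP address before and after connecting. The sketch below uses api.ipify.org, a public «what is my IP» service, purely as an example.

    # Print the current public IP; run before and after connecting to
    # the VPN. If the two addresses match, traffic bypasses the tunnel.
    import urllib.request

    def public_ip() -> str:
        with urllib.request.urlopen("https://api.ipify.org", timeout=10) as r:
            return r.read().decode("ascii")

    print("Current public IP:", public_ip())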

It's not too hard to find other free and/or commercial VPN services, if the ones mentioned do not satisfy your needs.

Security is a state of mind

Internet security is a complex thing. It can't be reduced to a single one-time action.

If you plan to surf securely, you should take the following into account:

  • cookies: if you accept them, your privacy may be jeopardized; the best way is not to store any cookies at all
  • JavaScript (or any other scripting language) may break your privacy and report a number of private details to the site; disable JavaScript for better security
  • Java and/or any other active content may likewise report your real-life data to the server; disable it for better security
  • your public identity, such as an email address you also use on «insecure» sites, may help others decipher your identity and disrupt your privacy altogether

Thus, safer surfing means a less convenient interface, with many of the sites' functions unavailable.

«Convenience, security, reliability — choose any two».

VPS: security doesn't like defaults

VPS: more control

Shared hosting has its limits. Even if the server isn't crowded, the restrictions and obvious security threats (co-hosted sites might be able to access your data; a system administrator's mistake may expose your sites' data to third parties, etc.) might force you to choose another type of hosting.

A VPS (virtual private server), also called a VDS (virtual dedicated server), is a virtual server isolated from the other virtual servers running on the same physical server. The prices of VPSes are very reasonable, and this hosting solution looks good if you can't stand the limitations of shared hosting any more.

Most VPS providers supply a control panel capable of monitoring network and other activity of the VPS, restarting/stopping/reinstalling it and so on.

You're in control. However, more power means more responsibility. If you choose a VPS, you should either take security precautions yourself or hire people able to do that.

Defaults are dangerous

Apart from automatically generated weak passwords for the root user, a VPS is vulnerable in many other respects.

First, it has a number of services installed and running, and the default settings of these services (such as the mail — SMTP, POP3, IMAP4 — servers, the HTTP server and so on) can expose some details of your setup to third parties. Most important, the versions of the software used to run these services may be out of date and insecure.

Default settings are just that: an average set of parameters, enough to work with a service. They aren't aimed at stronger security (since ultimate security means the lowest possible convenience).

So, unless you wish to witness your brand new, yours-only VPS becoming a source of problems (such as getting listed at Spamhaus or other public blacklists), or to lose your data and overall control over the VPS, you can't postpone hardening security. In fact, this should be done before you install and enable your Web servers or whatever network services you planned to run on your VPS.

The first things to do with your VPS

If anything below looks too complex or incomprehensible to you, I suggest you consider hiring a system administrator, or the related services, to take care of your VPS.

Shut down unnecessary services. Don't run something just because it's set up to run by default on your VPS. Refer to the installed OS manuals to determine which services are absolutely necessary and which aren't. Disable the latter.

Install and tune firewall software. If using a Un*x-like system, make sure you install iptables (or whatever is popular) properly. If using Windows, don't rely on its built-in firewall, since it's very limited in capabilities; use a personal firewall of some kind to have full control of both inbound and outbound traffic. Use whatever can control traffic in all directions under the OS of your VPS.

Tune the firewall according to the principle «everything not allowed explicitly is forbidden». Don't forget to leave access to SSH (Un*x) or whatever is used to access the VPS console; otherwise you'll have to re-install your VPS.
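
For illustration, a hedged Python sketch applying that default-deny principle through iptables on a Un*x VPS. The SSH port (22 here) is an assumption; run it from the provider's console rather than over a connection the rules might drop.

    # Default-deny firewall: drop everything, then allow loopback,
    # established connections and console access (SSH on port 22 here).
    import subprocess

    RULES = [
        ["iptables", "-P", "INPUT", "DROP"],
        ["iptables", "-P", "FORWARD", "DROP"],
        ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],
        ["iptables", "-A", "INPUT", "-m", "state",
         "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
        ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22",
         "-j", "ACCEPT"],
    ]

    for rule in RULES:
        subprocess.run(rule, check=True)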

Use non-standard ports for services granting access to the console. It's the simplest means of warding off those trying to get into your system.

Install all the latest security updates for the software you are using. The default VPS installation most probably contains out-of-date versions of many software pieces.

The steps above do not provide you with absolute security; they are just basic steps one should take to protect a VPS better. Security is a state of mind and a strategy; to maintain it at a high level, you should perform a number of actions on a regular basis.

Safe networking: monitoring traffic usage

Firewall is not enough

Safety means security; nowadays, when cyberspace is full of threats for every entity residing in it, from personal computers to large corporate networks, it is necessary to be warned about every undesirable or suspicious activity.

Several years ago a personal firewall and a malware shield such as Spybot, along with anti-virus software, were enough to feel safe, when talking of personal computers connected to the Internet.

For even a small company's intranet the precautions are essentially the same; however, you should also monitor the activity of every workstation and, as DoS attacks have become a common event, deploy intrusion detection systems such as Snort.

However, passive means of defense aren't enough. A piece of malware that has infected a workstation may cost the whole company very dearly. Active monitoring is required, and, if a suspicious action is detected, an immediate response should be an option.

The software installed on an average Windows workstation includes many a program that could be a security threat: instant messengers are a good example.

Traffic control and analysis

Traffic should always be under control, especially when you are not on an unlimited plan and every megabyte transferred must be paid for. A runaway system update, or even an open browser, may result in high traffic usage.

However, it is not enough just to meter the amount of data transferred and, when necessary, limit individual connections' rates or firewall the traffic gobblers altogether. Any unexpected transfers, especially via non-standard ports, should trigger immediate alerts. Even an attempt to use unexpected connections with the outer world should be handled at once.

In such cases, monitoring all inbound and outgoing traffic becomes a must; the optimal policy should be restrictive — «everything not allowed explicitly is forbidden». This approach may save you from handling the consequences of too much traffic consumed, or of private data leaking.

Most modern routers, as well as other network devices, allow using SNMP to access traffic details remotely.
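
As an illustration, a hedged Python sketch polling an interface's inbound octet counter over SNMP by shelling out to the net-snmp snmpget tool (assumed installed). The host, community string and interface index are placeholders; polling twice and subtracting the counters gives a transfer rate.

    # Read IF-MIB::ifInOctets for one interface via snmpget.
    import subprocess

    def if_in_octets(host: str, community: str = "public",
                     ifindex: int = 1) -> int:
        output = subprocess.run(
            ["snmpget", "-v2c", "-c", community, "-Ovq", host,
             f"IF-MIB::ifInOctets.{ifindex}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(output.strip())

    print(if_in_octets("192.168.1.1"))   # placeholder router address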

Thus, the minimal set of security precautions related to traffic analysis looks like this:

  • a firewall, installed both on the gateways to the extranet and on every server, preventing undesired connections and handling the minor types of network attacks
  • intrusion detection software, looking for patterns of possibly dangerous activity
  • anti-virus and anti-malware software installed on the mail server(s) and on every personal computer, to detect harmful data transfers
  • network monitoring software, keeping track of overall traffic usage and dynamics

And once again: using only passive, post factum security means isn't enough to prevent a number of network disasters.

Microsoft vs Waledac: spammers under attack?

Microsoft has been given the green light by a Virginia court to cut off 277 Internet domains associated with the Waledac botnet, one of the ten biggest networks of compromised computers in the US and a major source of spam and malware.

The fact that the software giant decided to attack spammers is reassuring. However, all other conditions being equal, those who abide by the law are much more restricted in their actions than those who don't.

Botnets generate approximately 90% of all email spam. However, it is not at all obvious that the Waledac shutdown will result in a noticeable reduction of the spam flow. Most companies and organizations express skeptical views about the results of this assault. "It's one of the smallest botnets out there, and the court order appears to have had very little effect on its output," says Matt Sergeant, anti-spam researcher with Symantec's MessageLabs.

The irony is that Microsoft is the corporation releasing the operating system used on the absolute majority of the infected computers comprising botnets.

It is becoming obvious that no amount of legal action can defeat cybercrime, spam included. The principles of computer security should be clear to everyone, and everyone should understand the immediate and distant consequences of neglecting the basic rules of security.

However, as The Federal Drive comments, "even though 90 percent of school administrators believe cybersecurity education is crucial, only about a third of them said their schools offered classes".

To defeat botnets, all computer users, worldwide, should be taught how to behave in cyberspace and what rules should be followed; and, most important, the users must actually follow the security rules. No exceptions.

The task looks next to impossible.

So the primary question remains: will the attack on a botnet mark the era of a cleaner Net, or will all those actions only perform an artificial selection among botnets, making them evolve into something much stronger and more dangerous?

Let's remain optimists.
