Recently in Network services Category

Social networks: beware of imitations


Social engineering in action

Social engineering was, is, and will remain one of the most powerful tools for making people disclose whatever an attacker wants. A typical example is described in a TechCentral.ie article telling how a security expert broke into a TV star's Facebook account. The scheme itself is so simple and effective that it is unlikely ever to stop working.

For those who would rather not read the mentioned article: the idea is to imitate the account of someone close to the person you wish to trick. While people nowadays may be wary of trusting a stranger, trusting someone you know feels natural. And if that person's personal data, including photos, biographical details and so on, are freely available on the Net, it is possible to trick almost anyone into believing they are dealing with someone they know very well.

The age of trust references

The main problem of the modern Net is a massive loss of trust. People have to prove their identities; we are taught not to trust anyone we encounter in cyberspace. Personal photos, scans of documents and the like can often be found easily, and that makes every identity check more and more unpleasant.

The problem is not determining the identity of a person contacting you for the first time. It is common practice to place phone calls, ask for utility bills and so on to make sure you are dealing with exactly the person you think you are dealing with.

The problem is that this sequence of identity checks may become a must every time you contact anyone on the Net. What are the means of proving one's identity?

Physical presence is almost always impossible.

Papers and other physical documents. The only problem is they must somehow be tested for validity, and it is impossible to make use of notary services every time you need to confirm who you really are.

Digital signatures (such as GPG/PGP key pairs) are good, provided the owner doesn't lose the passphrase and makes sure to revoke the key quickly if it leaks to a third party (a minimal verification sketch follows after this list).

Biometric data may seem very reliable, but it is inconvenient and expensive to check wherever a person's identity is in question.
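To illustrate the digital-signature option from the list above, here is a minimal sketch of checking a detached GPG signature from Python. It assumes the gpg binary is installed and the signer's public key is already in the local keyring; the file names are hypothetical.

    import subprocess

    def signature_is_valid(signature_file: str, signed_file: str) -> bool:
        """Return True only if signed_file matches the detached signature."""
        # gpg exits with status 0 only when the signature verifies
        result = subprocess.run(
            ["gpg", "--verify", signature_file, signed_file],
            capture_output=True,
        )
        return result.returncode == 0

    if signature_is_valid("message.txt.sig", "message.txt"):
        print("Signature valid: the sender holds the private key")
    else:
        print("Signature check FAILED: treat the message as suspect")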

The loss of anonymity is an important issue as well. We are told that anonymity is evil, yet reality demonstrates that the problem is not anonymity itself, but the incompatibility of most societal norms and laws with cyberspace.

Currently a number of means are used to check that people are actually who they claim to be: phone calls and so on. However, if someone impersonates a close friend of yours and you trust the impostor (trusting their Internet presence), your trust in the real person will suffer as well.

The primary damage done by cybercrime is that we are being persuaded not to trust anything we see on our screens.

Conclusion

There are always means of contact a person seldom or never shares with the whole world, only with the most trusted people.

Also, if real trust is to be established, you had better meet the other human being in person, at least once.

In all other cases I can only suggest remaining wary when you are contacted by a person claiming to be a friend or relative of yours, until you reach him or her by other means (the personal ones mentioned above) and make sure the cyber-image belongs to the physical person you know.

And the rule of thumb remains the same: do not spread your personal information, unless you know what you are doing.

After all, the loss of trust can be far graver than any other consequence.

Blogging on: watch your blog's heartbeat


Blogs: uptime is important

Blogs aren't just «personal diaries» any more. They are used to generate income; blogging has become a business. Like any business, blogging can suffer a lot if a blog becomes slow to respond, ceases to be available, or exhibits other undesirable traits.

It is hard work to watch even a single blog when it comes to keeping it healthy. A number of parameters should stay within acceptable bounds: page loading speed, the presence of documents (no '404' errors) and so on. A blogger should be able to notice and moderate incoming comments and trackbacks, and make sure announcements of new posts spread through the statusphere in time.

Most of a blog's status data can be obtained by requesting a specially crafted URL (using the GET method) or by posting data to a URL (the POST method). In other words, retrieving and analyzing such data can serve as a good means of blog health monitoring.
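As a minimal sketch of such a GET-based probe, using only the Python standard library (the URL and the threshold are hypothetical):

    import time
    import urllib.error
    import urllib.request

    BLOG_URL = "https://example.com/blog/"  # hypothetical blog address
    MAX_SECONDS = 3.0                       # hypothetical acceptable delay

    def check_blog(url: str) -> None:
        """Fetch the page, report HTTP status and loading time."""
        started = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                elapsed = time.monotonic() - started
                print(f"HTTP {response.status}, loaded in {elapsed:.2f}s")
                if elapsed > MAX_SECONDS:
                    print("WARNING: the blog is slow to respond")
        except urllib.error.HTTPError as err:
            print(f"HTTP error {err.code}: a document may be missing (404?)")
        except urllib.error.URLError as err:
            print(f"Blog unreachable: {err.reason}")

    check_blog(BLOG_URL)

A real monitor would run such checks on a schedule and raise alerts instead of printing.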

If a blog experiences a problem, it must be identified and handled as soon as possible. Downtime may cost a lot, so one should monitor all the data reflecting the blog's status.

Monitoring: looking for an optimal tool

Network monitoring isn't a new idea; there are hundreds of tools and services available. The question is what to choose, and why.

First, you could think of monitoring services such as mon.itor.us.

Pros: they often perform monitoring from a number of locations; they are often inexpensive or free; they are easy to set up.

Cons: these services often lack useful features, such as a variety of network protocols or convenient means of alerting (for example, via SMS and/or Jabber messages). Also, if you need to monitor restricted-access data, you may have to grant the service access to that information, which may be unacceptable.

Second, you could install server-side monitoring software, such as Nagios.

Pros: powerful and efficient; the functionality of such software can often be extended using plug-ins.

Cons: you need a certain amount of knowledge of operating systems and related areas to install, run and troubleshoot such software. Also, it often lacks a convenient user interface.

Third, you can use desktop monitoring software such as IPHost Network Monitor.

Pros: desktop monitoring software is often easy to use and easy to understand. It can be extended using well-known monitoring products such as Nagios. Also, you can access restricted areas of your site by providing the correct credentials in the configuration of the monitored resources (see the sketch after this list).

Cons: the software must run as long as possible to gather a good amount of service statistics. Also, it can cause high CPU load and consume a significant amount of RAM and disk space.
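As a hypothetical illustration of the credentials point above, here is how a probe might fetch a password-protected page with HTTP Basic authentication; the URL and credentials are invented for the example:

    import base64
    import urllib.request

    URL = "https://example.com/blog/admin/health"  # hypothetical restricted page
    USER, PASSWORD = "monitor", "secret"           # hypothetical credentials

    request = urllib.request.Request(URL)
    # HTTP Basic auth: base64-encoded "user:password" in the header
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")

    with urllib.request.urlopen(request, timeout=10) as response:
        print(f"HTTP {response.status}")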

Conclusion

No service or product is ideal, in fact. The optimal setup could make use of two or more of the mentioned choices.

For example, if a network of servers/services could supply their data to the desktop software, it could expand the capabilities of both.

Until then, a desktop implementation of network monitoring software seems the best option for the absolute majority of bloggers who wish to be the first to know when their blogs experience any problem.

A door with no keyhole: port knocking at work


Standard vs. non-standard ports

The first problem system administrators face when setting up any server accessible from the Internet is handling all the 'hacker' attacks aimed at gaining access via SSH and other means.

My experience shows that even if a domain name has never been exposed on the Net, the server comes under constant «knocking» almost immediately.

First, it is annoying.

Second, if there are users with weak passwords, it is a constant security risk. No matter how much the users are held responsible, system administrators will have to clean up the mess.

The simplest approach, banning (firewalling) an intruder by IP after a certain number of connection attempts, isn't ideal: IP addresses can be spoofed, and a malevolent user may effectively block legitimate access from given IPs.

One of the simplest solutions is to use non-standard port numbers for standard services. They should be chosen wisely, so as not to interfere with other well-known services using the same port numbers. Also, when and if the «secret» port leaks, we return to the original problem and must either choose another port number or use another approach altogether.

Port scanning can also nullify the efficiency of non-standard ports, unless connections are restricted to known IP addresses only («strict whitelisting»).

Port knocking

There is a solution named port knocking. In short: the process/person wishing to connect first tries to connect to a known sequence of ports («knocks»); if the knock is correct, that IP is granted the right to connect to, say, the SSH port.
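On the client side the «knock» is trivial. A minimal sketch using only the Python standard library (the host name and port sequence are hypothetical):

    import socket
    import time

    SERVER = "server.example.com"        # hypothetical server name
    KNOCK_SEQUENCE = [7000, 8000, 9000]  # hypothetical secret sequence

    def knock(host: str, ports: list[int]) -> None:
        """Send connection attempts to the port sequence, in order.

        The connections are expected to fail (the ports are closed);
        the server only needs to see the attempts arrive in order.
        """
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                s.connect((host, port))
            except OSError:
                pass  # a refused or timed-out attempt is still a valid knock
            finally:
                s.close()
            time.sleep(0.2)  # keep the attempts in the right order

    knock(SERVER, KNOCK_SEQUENCE)
    # The firewall should now briefly allow, e.g., `ssh server.example.com`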

The method is simple in concept yet subtle in practice, and it has both advantages and disadvantages.

To implement port knocking with, say, iptables (netfilter), a special module can be used. Alternatively, a daemon process may monitor firewall logs and, upon detecting a knock, temporarily add a rule allowing connections from the knocking IP. The rule should be removed when the connection session ends.
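A minimal sketch of the log-watching variant, assuming the firewall's LOG target writes dropped-packet lines containing SRC= and DPT= fields to a file (the log path and knock sequence are hypothetical, and this is an illustration, not the module mentioned above):

    import re
    import subprocess
    import time

    LOG_FILE = "/var/log/kern.log"       # hypothetical firewall log
    KNOCK_SEQUENCE = [7000, 8000, 9000]  # must match the client's sequence
    SSH_PORT = 22

    LINE_RE = re.compile(r"SRC=(?P<ip>[\d.]+).*DPT=(?P<port>\d+)")
    progress = {}  # ip -> how many ports of the sequence seen so far

    def allow_ssh(ip: str) -> None:
        """Insert an iptables rule letting this IP reach the SSH port."""
        # must run as root to modify the firewall
        subprocess.run(
            ["iptables", "-I", "INPUT", "-s", ip,
             "-p", "tcp", "--dport", str(SSH_PORT), "-j", "ACCEPT"],
            check=True,
        )
        print(f"SSH opened for {ip}")

    def process_line(line: str) -> None:
        match = LINE_RE.search(line)
        if not match:
            return
        ip, port = match.group("ip"), int(match.group("port"))
        if port == KNOCK_SEQUENCE[progress.get(ip, 0)]:
            progress[ip] = progress.get(ip, 0) + 1
            if progress[ip] == len(KNOCK_SEQUENCE):
                del progress[ip]
                allow_ssh(ip)
        else:
            progress.pop(ip, None)  # wrong port: start over

    with open(LOG_FILE) as log:
        log.seek(0, 2)  # start at the end of the log, like `tail -f`
        while True:
            line = log.readline()
            if line:
                process_line(line)
            else:
                time.sleep(0.5)

Removing the rule when the session ends, and restarting this watcher if it dies, are left out for brevity; the latter risk is exactly the lock-out problem described below.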

The major disadvantage is this: if the watching daemon dies or starts to malfunction, you are effectively locked out of your server. Also, the knocking attempts may be recorded in the logs in a different order (though this is not a major problem if the exact sequence isn't important).

My further experiments led me to conclude that a «hybrid» security approach can be used in most cases.

First, I add a permanent rule allowing connections from certain IPs (namely, those of the computers I mostly work from). That saves me from the worst-case scenario, when the port-knocking daemon dies for some reason and cannot be restarted automatically.

Second, I set up a watchdog process that monitors the port-knocking daemon and restarts/reloads it if necessary.

Third, I use non-trivial, unused and unassigned port numbers.

Fourth, to prevent spoofing of my administrative IPs, I use encryption to ensure it is really me who initiated the connection.
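The fourth point is deliberately vague above; one hypothetical way to do it is a single UDP packet authenticated with a shared-secret HMAC, so a spoofed source address alone is useless without the secret (this is essentially what single packet authorization tools such as fwknop do). The secret and the knock port below are invented for the example:

    import hashlib
    import hmac
    import socket
    import time

    SECRET = b"shared-secret"              # hypothetical pre-shared key
    SERVER = ("server.example.com", 9999)  # hypothetical knock port

    # The payload is a timestamp plus its HMAC; the server recomputes
    # the HMAC and rejects stale or forged packets.
    timestamp = str(int(time.time())).encode()
    digest = hmac.new(SECRET, timestamp, hashlib.sha256).hexdigest().encode()
    packet = timestamp + b":" + digest

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, SERVER)
    sock.close()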

So far, the method wards off most of the «knockers» hoping to find a weak password.

Perhaps you could suggest an even more secure, yet still reliable and not too complex scheme?

Reliable email service: keep everything in order


Email is still important

Email was one of the first communication means invented. Since the very first days of the Internet, email was and remains one of the most convenient tools of communication.

It remains one of the quickest and most important means of communication, regardless of the latest plagues like spam. Hence the task is to make this service as reliable and efficient as possible.

This means all the underlying services must be reliable and efficient.

What should we watch

The term «email», in fact, covers more than the ability to receive and send messages.

To receive email, we typically use the POP3 or IMAP4 protocols. Webmail can also be used (webmail is often an additional service communicating with the underlying SMTP and POP3/IMAP services).

To send mail, we should make use of SMTP.

The mail is often scanned for possible threats (viruses, malware etc.), so a virus scanner is required.

The known ports should be open and available for connections.

To receive email, one usually has to authenticate. Thus, the authentication service(s) must also be available.

If you keep track of everything that must function properly, the list of resources may well reach several dozen items: server load, disk space, the ability to connect to the outer world, and so on. So the phrase «make the email service work reliably» implies monitoring quite a lot of underlying services and resources.
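For instance, the host-level resources just mentioned can be checked with a few lines of standard-library Python (the path and thresholds are hypothetical):

    import os
    import shutil

    DISK_PATH = "/var/mail"   # hypothetical mail spool location
    MIN_FREE_GB = 5           # hypothetical threshold
    MAX_LOAD = 4.0            # hypothetical threshold

    # Free disk space where the mail is stored
    usage = shutil.disk_usage(DISK_PATH)
    free_gb = usage.free / (1024 ** 3)
    if free_gb < MIN_FREE_GB:
        print(f"WARNING: only {free_gb:.1f} GB free at {DISK_PATH}")

    # 1-minute load average (Unix only)
    load1, _, _ = os.getloadavg()
    if load1 > MAX_LOAD:
        print(f"WARNING: load average is {load1:.2f}")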

The solution

The solution is simple: all the services and resources involved must be monitored, and any problem with any one of them must be addressed as soon as possible. The question is how to perform the monitoring and what tools to use.

Two days ago we announced the availability of a major update of the IPHost Network Monitor software.

The way we ensure our corporate email service works flawlessly is to monitor all the services and resources mentioned above. SMTP, POP3/IMAP4 services, connectivity, server load, disk space, the mail queue and all the underlying services must be available and respond quickly enough. The task is to set the delays and response times that should be considered acceptable.
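As an illustration of what such a probe does under the hood, here is a minimal sketch checking SMTP and POP3 availability and response time with the Python standard library (the host name and threshold are hypothetical):

    import time
    import smtplib
    import poplib

    MAIL_HOST = "mail.example.com"   # hypothetical mail server
    MAX_SECONDS = 5.0                # hypothetical acceptable response time

    def timed(name, probe):
        """Run probe(), report how long it took or why it failed."""
        started = time.monotonic()
        try:
            probe()
        except Exception as err:
            print(f"{name}: FAILED ({err})")
            return
        elapsed = time.monotonic() - started
        status = "SLOW" if elapsed > MAX_SECONDS else "OK"
        print(f"{name}: {status} ({elapsed:.2f}s)")

    def check_smtp():
        server = smtplib.SMTP(MAIL_HOST, 25, timeout=10)
        server.noop()    # a harmless command: the server must answer
        server.quit()

    def check_pop3():
        server = poplib.POP3(MAIL_HOST, timeout=10)
        server.quit()    # the greeting alone proves the service is up

    timed("SMTP", check_smtp)
    timed("POP3", check_pop3)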

An important note: if you wish to monitor and troubleshoot an email service, it is advisable to use an alert type other than emailing the administrator. Emailing the administrator about email problems might result in no alerts being received at all.

As for myself, I prefer Jabber alerts for less critical events and SMS for the most critical ones.

Google and all the Buzz


Google Buzz

Since Tuesday, the 9th of February, 2010, Google Buzz has been the focus of many IT posts, reviews and studies. The Net is full of 'forecasts' such as 'Google Buzz is the Twitter killer'. Popular news blogs, such as Mashable, began to collect opinions, carry out polls and use the new service, adding still more to their popularity.

I have also tried the new service. In case you haven't heard of it (which is very unlikely) or haven't tried it yet (there can be a number of reasons not to dive into the Buzz), here are my thoughts about the service.

First, it's not "yet another Twitter", nor is it aimed at the same niche in the statusphere. Buzz doesn't limit you to 140 characters; you may write much, much longer posts. You can edit posts (a significant advantage over the statusphere services). And you can attach (insert) multimedia files, insert links and use basic text formatting (bold, italic and underline styles).

Second, it is integrated into Google Mail. Yes, that means you need a GMail account to make use of Google Buzz. It also means there won't be that many people switching their loyalty to Google Buzz and abandoning services like Facebook.

Third, Google Buzz's developers react quite quickly to user criticism. If you visit your Buzz page every day, you will notice changes. The notorious "your contacts are visible to everyone by default" security issue was addressed very quickly. User feedback is well monitored, and security issues do not go unnoticed.

The ado about Google Buzz is understandable. Google's previous service, Google Wave, wasn't much of a hit, especially since it was so slow and cumbersome. Buzz works quickly, especially when Google Chrome is used.

One of the most amazing features of Google Buzz could be its ability to suggest whom to follow. The 'autofollow' feature was quickly disabled (users were displeased, so to say, to be befriended with people they don't know), and it seems the service's developers are using the enormous flood of feedback to make important changes literally on the fly.

To me, Buzz is yet another broadcasting and group communication service. Until the following are implemented:

  • RSS feeds for comments
  • Email notifications of new comments/posts
  • Google Buzz for Google Apps (for GMail at users' domains)
I won't be too eager to pay Google Buzz much attention. Yet I think many new features will be added rather quickly.

If the service is indeed improved and enhanced quickly, that alone can make users reluctant to try Buzz change their minds. Alas, Facebook and Twitter aren't quick at all when it comes to users' suggestions for enhancements.

By the way, have you noticed that links posted on Buzz timelines lack the infamous rel="nofollow" attribute? I wonder, will that last for long?


About this Archive

This page is an archive of recent entries in the Network services category.
