February 2010 Archives

Network monitoring: a means of preventing Web site problems


During the last few days, I witnessed a number of discussions on forums such as WebHostingTalk, where people who had encountered sudden site problems were blaming their hosting providers for unexpected downtime.

What always amazes me is that most site owners do not realize that problems are much easier to prevent than to handle. Only a few bother to monitor their Web site's activity, its services and their stability. I believe those few rarely find themselves blaming someone else for their sites' problems.

An average Web site nowadays is no longer a collection of static pages with a few communication and feedback features, like a guestbook. Scripting languages, rich content and a high level of interaction are common for modern sites. The more features a site uses, the more services are vital to its functioning.

Even if a site is not a true Web 2.0 site (i.e., its content isn't created by the site's users), interaction is a basic feature that must always be available. The more interaction is assumed, the more care should be taken to keep an eye on all the site's features and make sure they work as expected.

It is necessary to be warned about every possible problem with the site (a connectivity issue, a high load time, certain pages being unavailable and so on). Any such problem may adversely affect the whole Web site. It is simply impossible to do the monitoring by hand; the more complex and interactive a site grows, the greater the need for automated means of monitoring.

For example, feedback/contact forms could be pages of crucial importance. If anything goes wrong with them, visitors' inability to reach the site administrators could cost you quite a number of visitors. It is rare nowadays for email addresses to be published on a site in explicit form, but even if they are, and if email is processed by the same server the site lives on, any server problem means the site is effectively cut off from any communication.

Thus, creating several monitors that test all the vital site features is a condition sine qua non, and the idea is simple: you should never have to be told by customers that your site is in trouble. You should be the first to know of any problem, so you can take action quickly.
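To make the idea concrete, here is a minimal monitoring sketch in Python. The URLs, the alert address, the thresholds and the assumption of a local SMTP server are all placeholders you would replace with your own setup; a real monitor would add retries, logging and a proper scheduler or an external monitoring service.

```python
import smtplib
import time
from email.message import EmailMessage

import requests  # third-party HTTP client

# Hypothetical values: replace with your own pages, address and limits.
PAGES_TO_WATCH = [
    "https://example.com/",
    "https://example.com/contact",
]
ALERT_ADDRESS = "admin@example.com"
MAX_LOAD_TIME = 3.0   # seconds before a page counts as "slow"
CHECK_INTERVAL = 300  # seconds between monitoring rounds


def check_page(url):
    """Return a problem description, or None if the page looks healthy."""
    try:
        started = time.time()
        response = requests.get(url, timeout=10)
        elapsed = time.time() - started
    except requests.RequestException as exc:
        return f"{url}: connection problem ({exc})"
    if response.status_code != 200:
        return f"{url}: unexpected status {response.status_code}"
    if elapsed > MAX_LOAD_TIME:
        return f"{url}: slow response ({elapsed:.1f} s)"
    return None


def send_alert(problems):
    """Email a summary of problems; assumes an SMTP server on localhost."""
    message = EmailMessage()
    message["Subject"] = "Site monitoring alert"
    message["From"] = ALERT_ADDRESS
    message["To"] = ALERT_ADDRESS
    message.set_content("\n".join(problems))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(message)


if __name__ == "__main__":
    while True:
        problems = [p for p in (check_page(u) for u in PAGES_TO_WATCH) if p]
        if problems:
            send_alert(problems)
        time.sleep(CHECK_INTERVAL)
```

Even a crude script like this, run from another host, covers the basics: connectivity, response codes and load time, with an alert that reaches you before your visitors do.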

Twitter phishing: hunt for a whale


For several days Twitter users have been hit with a massive phishing attack.

The attack itself was very similar to the majority of other attacks of this kind. If you have spotted a message containing the string "This you????" and a shortened URL, you could become a victim as well. The link leads to a fake Twitter site that asks you to enter your Twitter credentials to log in.

Needless to say, users who fell prey to this trick handed their Twitter accounts to the scam artists, and the "phished" accounts went on sending the dangerous messages further. I have seen only a dozen such messages in my direct messages box on Twitter; many of my friends have seen hundreds.

Phishing performed via email messages is now a classic. Even though it still finds its targets, it's a kind of evil we all know of. It's amazing that people well aware of email phishing were careless enough to get caught by the Twitter version of the same scheme.

Cyber threats evolve and change appearance all the time. It looks like the association "email - phishing" is so strong that people do not suspect they can be tricked when they see a link in a strange private message.

Now imagine an account is phished from you on a service (such as Facebook) that acts as an authentication means for a number of other sites (i.e., acts as an OpenID provider or similar). The obvious consequence? Your mistake can cost you not only the Facebook account, but accounts on several other network services as well.

The conclusion is: if you are urged to click a link, think twice before doing that.

Note: there are sites like Artists Against 419, which list known fake sites and update those lists often.

China leads the world in hacked computers - the Age of Kraken dawns?


McAfee, a Silicon Valley security firm that studies all types of modern cyberthreats, states that China currently has the largest number of compromised ("hacked") computers. The estimated numbers are 1,095,000 computers in China (and 1,057,000 in the USA).

The compromised computers often work in so-called botnets; a botnet is controlled from a hidden command center and can perform a number of actions. Among them are sending spam (a compromised, "zombie" computer may send hundreds of thousands of spam messages daily), attacking other computers (e.g., performing DDoS attacks) and so on.

Kraken, a notorious botnet of 2008, united more than 400,000 infected computers. The malware that performed the infection and joined computers to the botnet was capable of self-modification and evaded most known antivirus and antimalware tools.

An infected computer may perform its malicious actions in a manner that can't be easily detected. Not all the hacked computers are home computers; Kraken managed to infiltrate many corporate networks, passing firewalls and other filters undetected.

There are hundreds of botnets in cyberspace. No amount of thorough monitoring and security precautions is enough to reduce the cyber-zombie army significantly: many factors contribute to botnet growth.

First, installing the proper software to guard a computer against any type of malware requires a certain level of knowledge and education.

Second, such products, especially those able to address "zero-day" attacks as well, aren't cheap.

Third, a significant number of unlicensed OS installations are vulnerable simply because they aren't allowed to install security updates.

Botnets are often controlled by criminals. Since large botnets can do quite a lot of damage (they can "shut down" virtually any site or other network service), their existence is a direct threat to any country; and the more a country depends on the Internet, the more vulnerable it becomes to cyber-threats.

You may learn more from Damballa, a company dealing with botnet threats. But to be able to withstand the threat, one simple condition must be met: all computer users should be literate enough to guard their computers and avoid contributing to someone else's grief.

Botnet-controlling scripts can easily be found and downloaded free of charge. This fact alone should be impressive enough to make you think of personal computer security as a conditio sine qua non.

Google and all the Buzz


Since Tuesday, February 9, 2010, Google Buzz has been the focus of many IT posts, reviews and studies. The Net is full of 'forecasts' like 'Google Buzz is the Twitter killer', and the like. Popular news blogs, such as Mashable, began to collect opinions, carry out polls and use the new network service, adding still more to their own popularity.

I have also tried the new service. In case you haven't heard of it (which is very unlikely) or haven't tried it yet (there can be a number of reasons not to dive into the Buzz), here are my thoughts about the service.

First, it's not "yet another" Twitter, nor is it aimed at the same niche in the statusphere. Buzz doesn't limit you to 140 characters: you may write much, much longer posts. You can edit posts (a significant advantage over the statusphere services). And you can attach (insert) multimedia files, insert links and use basic text formatting (bold, italic and underlined styles).

Second, it is integrated into Google Mail. Yes, that means you need a GMail account to make use of Google Buzz. It also means there won't be that many people switching their loyalty to Google Buzz and abandoning services like Facebook.

Third, Google Buzz's developers are quite quick to react to user criticism. If you visit your Buzz page every day, you will notice changes. The notorious "your contacts are visible to everyone by default" security issue was addressed very quickly. User feedback is well monitored, and security issues are not left unnoticed.

The ado about Google Buzz is understandable. Google's previous service, Google Wave, wasn't that much of a hit, especially since it was so slow and cumbersome. Buzz works quickly, especially when Google Chrome is used.

One of the most amazing features of Google Buzz could be its ability to suggest whom to follow. The 'autofollow' feature was quickly disabled (users were displeased, so to say, to find themselves befriended with people they don't know), and it seems the developers of the service are using an enormous flood of feedback to make important changes literally on the fly.

To me, Buzz is yet another broadcasting and group communication service. Until the following are implemented:

  • RSS feed to comments
  • Email notifications of new comments/posts
  • Google Buzz for Google Apps (for GMail at users' domains)
I won't be too eager to pay Google Buzz much attention. Yet I think many new features will be added rather quickly.

If the service is indeed improved and enhanced quickly, that alone could change the minds of users who are reluctant to start using Buzz. Alas, Facebook and Twitter aren't quick at all when it comes to users' suggestions for enhancements.

By the way, have you noticed that all the links posted on Buzz timelines lack the infamous rel="nofollow" attribute? I wonder how long that will last.


Amazon S3: now with per-object versioning enabled


Two days ago, Amazon Web Services (AWS) announced a new feature for their Simple Storage Service: versioning. Every object (file) stored in a bucket with versioning enabled may have an arbitrary number of versions stored. Every version may be retrieved, or irrevocably deleted, at any moment.

S3 has already been used for versioning, namely by mounting it as a file system and storing actual version control system repositories "on the cloud". The high reliability of S3 made it possible to handle that task with tools such as s3tools and/or Jungle Disk, either synchronizing with the cloud storage or using it directly.

However, using a version control system assumes a certain level of experience and knowledge of how version control systems work. The new feature makes it possible to simply overwrite the current versions of data stored on the cloud without the risk of losing earlier versions.
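For illustration, here is a minimal sketch of that workflow using boto3, the current Python SDK for AWS (which post-dates this post; at the time you would use a tool like the aws utility mentioned below). The bucket and key names are made-up examples: enable versioning once per bucket, overwrite an object freely, then list and retrieve any earlier version by its VersionId.

```python
import boto3  # AWS SDK for Python; requires configured AWS credentials

# Hypothetical bucket and key names; replace with your own.
BUCKET = "my-versioned-bucket"
KEY = "reports/summary.txt"

s3 = boto3.client("s3")

# Enable versioning on the bucket (a one-time operation per bucket).
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Overwrite the same key twice; with versioning on, both versions are kept.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"first draft")
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"revised draft")

# List the stored versions of the object (newest first for this key).
listing = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY)
for version in listing.get("Versions", []):
    print(version["VersionId"], version["IsLatest"], version["LastModified"])

# Retrieve the oldest version explicitly by its VersionId.
oldest = listing["Versions"][-1]
body = s3.get_object(Bucket=BUCKET, Key=KEY, VersionId=oldest["VersionId"])["Body"].read()
print(body)  # b"first draft"
```

In other words, the bucket itself keeps the history, and no knowledge of a separate version control system is required to get an older copy back.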

Thus, this long-expected feature contributes to greater data safety and, even though currently only a few command-line utilities (such as aws) are able to control the versioning feature, it can result in a greater variety of end-user tools making use of cloud storage to keep data safe and secure.

The obvious drawback is that all the versions are kept in the same bucket (thus raising the storage/maintenance price); also, there's no obvious way to restore specific versions of a number of files at once; however, that can be implemented with little overhead.

You can find more details on the new feature in the Amazon Simple Storage Service developer guide.
