June 2009 Archives

Monitoring Websites: additional tips


A site's outgoing traffic is often limited, especially if it is not a big portal or shopping mall. Thus it wouldn't be wise to select a 'heavy', big HTML page to check: under certain circumstances, monitoring alone could drain the site's traffic quota. An ideal solution is to have specially crafted pages, small in size and causing as little server load as possible (e.g. with few or no database requests and the like).
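
For illustration, such a status page can be tiny and avoid the database entirely. Below is a minimal sketch using Python's standard library; the /status path and the 'OK' body are arbitrary choices for this example, not part of any standard:

    # Minimal status endpoint: serves a tiny static response with no
    # database access, suitable as a cheap monitoring target.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StatusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/status":
                body = b"OK"
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("", 8080), StatusHandler).serve_forever()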

Another point is sending appropriate headers. Modern Web applications often perform a number of checks to prevent automated access to the system - a common means of warding off scripts such as email harvesters, automated reply bots and the like. Thus, not only should sensible information be passed in the request headers, but one should also make sure that a POST request, where necessary, is sent only after a GET request, and that whatever cookies are set are passed along.
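
As a rough sketch of that sequence with Python's requests library (the URLs, header values and form fields below are hypothetical):

    # Fetch the form page first (GET) so the server can set cookies,
    # then submit the form (POST) within the same session.
    import requests

    session = requests.Session()
    headers = {"User-Agent": "Mozilla/5.0 (compatible; site-monitor/1.0)"}

    # GET first: picks up any session cookies the application sets.
    page = session.get("http://example.com/feedback", headers=headers, timeout=10)
    page.raise_for_status()

    # POST second: cookies from the GET are re-sent automatically.
    result = session.post(
        "http://example.com/feedback",
        headers=headers,
        data={"name": "monitor", "message": "test"},
        timeout=10,
    )
    print(result.status_code)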

Finally, I would be careful not to send any type of monitoring request too often. That could not only load the site too much, but could also trigger a spam or DoS alert on the site. So the idea is to verify the site's presence with lighter tools, such as ping, wherever those are available, and only occasionally send GET or POST requests.

A polling interval of 60 seconds should be enough for any real-life task; if necessary, pinging can be performed several times a minute, and only after that would a real HTTP request be sent.
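
Here is a minimal sketch of that 'ping first, HTTP occasionally' schedule. It assumes a Linux-style ping binary; the host name and intervals are illustrative:

    # Cheap ICMP checks run every poll; a full GET only every Nth poll.
    import subprocess
    import time
    import requests

    HOST = "example.com"
    POLL_SECONDS = 60          # base polling interval
    HTTP_EVERY_N_POLLS = 5     # full HTTP request every 5th poll

    def host_pingable(host):
        # One ICMP echo with a 5-second deadline; returncode 0 means a reply.
        return subprocess.run(
            ["ping", "-c", "1", "-W", "5", host],
            capture_output=True,
        ).returncode == 0

    poll = 0
    while True:
        if not host_pingable(HOST):
            print("ping failed - host may be down")
        elif poll % HTTP_EVERY_N_POLLS == 0:
            r = requests.get("http://" + HOST + "/", timeout=10)
            print("HTTP", r.status_code)
        poll += 1
        time.sleep(POLL_SECONDS)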

To sum up: to check the site's state, one should choose lightweight page(s), poll/request them only occasionally, set the headers as expected, and test the page contents, without assuming that an error code will actually be returned when an error occurs.

Monitoring Websites: optimal strategy of monitoring


Nowadays many people run a site, and often one person has many sites - blogs, personal pages, portals and so on. Site downtime is the type of event that can cost its owner dearly, especially in the case of a well-known site, an e-shop or any popular network service. The ability to receive an early warning about all of a site's problems can save the webmaster a great deal of time and other resources.

I should also add that nowadays a site is not just a server sending static pages; the whole Web is now highly dynamic. In effect, every Web site consists of one or more Web applications.

There are no standards for Web application efficiency, required response time and so on; however, it is common practice to consider a site response within 1 second a very good time, and a response within 5-10 seconds an acceptable one. Thus, there should be a means to set the required response time when monitoring the site, and to adjust the timing accordingly in every given case.
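
To make those thresholds concrete, a simple timing check with the requests library might look like this sketch (the URL and exact cut-off values are illustrative):

    # Classify a page's response time against the rough thresholds above
    # (1 s very good, up to 10 s acceptable).
    import requests

    resp = requests.get("http://example.com/", timeout=15)
    seconds = resp.elapsed.total_seconds()

    if seconds <= 1.0:
        print("very good: %.2f s" % seconds)
    elif seconds <= 10.0:
        print("acceptable: %.2f s" % seconds)
    else:
        print("too slow: %.2f s" % seconds)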

A Web application is one that uses HTTP to communicate. I have used a number of miscellaneous Web applications, and I should stress that one should be able to monitor not only GET, but also POST and perhaps other types of requests as well. GET requests are the most common; they are enough to monitor the site's main page and a Web application's default entry point. However, to make sure that a feedback form, or any similar facility, is working as expected, I should be able to send a POST request as well and test the results.

Of course, one should be well aware of the logic of any given Web application, since GET requests are typically used to retrieve data, and POST requests to apply changes. That is, if a form is known to send, say, an email, precautions should be taken to prevent automated email sending when testing a POST request on that page.

I should also add that checking the returned error codes isn't enough, since many Web applications, say from the CMS group, may send a custom response along with the error code (404, 503 or anything else) and/or redirect to a special page handling missing pages, unauthorized access attempts and so on. To detect that a page wasn't found, one should check the returned page as well, possibly looking for a string (pattern) that indicates an abnormal situation.
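
A sketch of such a combined check - status code plus a body pattern - could look like this; the URL and the error pattern are illustrative:

    # Don't trust the status code alone; also scan the body for a
    # marker that the application uses on its custom error pages.
    import re
    import requests

    resp = requests.get("http://example.com/some-page", timeout=10)
    error_pattern = re.compile(r"page not found|access denied", re.IGNORECASE)

    if resp.status_code != 200 or error_pattern.search(resp.text):
        print("check failed")
    else:
        print("check passed")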

IPHost Network Monitor is able to make use of all the mentioned tools and techniques to detect performance and/or accessibility degradation of a Web site.

To illustrate how HTTP monitoring is done, let's create a test monitor within IPHost Network Monitor. To start, click the 'New Monitor' button, select 'HTTP(S)' and click 'Next':

Enter the URL of the site to watch, say http://amazon.com/:

Now the newly created monitor appears in the left pane. Click on it and let's do some tuning.

You can change 'Notes' in the Monitor section to get a detailed title in the reports you'll receive, but first of all let's set the poll time in the Dependency and Polling Interval section. It's hardly useful to test the page too often, since it is highly dynamic and may not load quickly over busy networks. Let's set the interval to 600 seconds.

Further, in Monitor Definition we can use string matching to deliberately cause a failure condition, in order to test alerts. Visit the Amazon site and choose any non-obvious string from the Amazon Daily Blog, say EyeClops. Now set the 'Response Validation' dropdown to 'Match String' and in the field below type the string to watch for, "EyeClops" (without quotes).

In the Action section, set the action to whatever means of alert notification you wish.

Now let's change the defaults in the Availability Monitoring and Performance Monitoring sections. The site may not load quickly, so let's set the Down State Timeout to 20 seconds, the Performance Warning level to 5 seconds and the Performance Alert level to 10 seconds.

That's all for a start. Now right-click the monitor in the left pane and click 'Start'. Watch the initial messages in the Log view.

Now let's return to this monitor in 20 minutes or a bit later and inspect the logs:

You have just created your first monitor using IPHost Network Monitor software.
