Recently in Network monitoring Category

Monitoring security

[Photo: a security camera]

Is the security for real?

Do you keep the security level high enough?

The question seems simple. But if the answer is positive, the next question will be: "What is high enough?"

The security camera in the photo is a mock-up: an imitation installed to deter would-be criminals. It doesn't watch anything, even though it is installed. Are your security systems real? Is the monitoring you perform real? The answer is not obvious, even if you have installed network monitoring software and know exactly what to monitor.

Nowadays, it's not enough to monitor the servers alone, whatever services you watch on them. The fact that a service is available and replies with expected data doesn't mean it's in good shape.

For example, if a piece of installed software is outdated and can be compromised, security is weak. So monitoring the vital systems themselves is not enough.

Hidden flaws of security

The problem of using insecure software isn't limited to out-of-date components. At times an insecure configuration can be the reason for a system malfunction or failure. How can you determine whether there are flaws in the configuration?

Neither aspect of the problem has a fully automated solution. Some software products have a mailing list or other means of notifying users about out-of-date components or security threats; most, however, do not. The only way to stay abreast of events is to follow the news on security-related forums and software sites and react immediately to every newly published threat.

As for studying log files, it's relatively easy to detect how many times a given string appears in a given log file (say, the ssh log registers all login attempts, so if anything strange happens, it's better to be notified as soon as possible).
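
For instance, a minimal Python sketch of such a check (the log path, the pattern and the threshold are assumptions; Debian-style systems log ssh attempts to /var/log/auth.log, others use /var/log/secure):

    #!/usr/bin/env python3
    # A minimal sketch: count failed ssh login attempts in the auth log.
    # The path, pattern and threshold below are examples; adjust for your system.
    PATTERN, THRESHOLD = "Failed password", 10

    with open("/var/log/auth.log") as log:
        failures = sum(1 for line in log if PATTERN in line)

    if failures > THRESHOLD:
        print(f"ALERT: {failures} failed ssh login attempts found")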

Also, it is good advice to install intrusion detection software such as Snort, update its rules on a regular basis and use its notification features to assess the security risk level and/or raise alerts.

Monitoring software may serve as the dashboard of your whole security setup: it's relatively easy to report all the important news to a single command centre and raise a relevant alert condition when necessary.

Security isn't a one-time set of actions. It's a philosophy, a discipline and everyday, routine work: researching the security world and being alerted before possible flaws are exploited.

Keep it simple, sage


Reinventing the wheel

Simpler solutions aren't always obvious. When looking for a way to monitor a parameter, one is always tempted to reinvent the wheel, e.g. to create a custom script every time the existing monitor types don't cover the need.

Definitely, it's not the best idea. First of all, it takes time. Second, it might be an inefficient solution. Third, it might be non-portable: if you wish to create the same monitor type for another host, you might have to write a similar script from scratch.

When it comes to monitoring the simplest parameters of the server, such as CPU load, memory usage and so on, there is no need to create a complex scheme of running remote scripts/applications and reporting data back to the monitor.

The magic abbreviation is SNMP, the Simple Network Management Protocol. Let's explain briefly how it can be used to monitor a number of system-level parameters of a server. A Linux-powered server is assumed, although in this case most other operating systems can be monitored through the same facility.

SNMP

The SNMP daemon isn't running by default; refer to longer how-tos, such as Monitoring server performance, for details on installing it.

Take care when setting up the daemon: it supports several protocol versions. If you don't wish to handle the security-related complexity of SNMP version 3, you may use v1 or v2c, but keep in mind that their security level is only basic: unless you restrict, by other means, who is granted access to the daemon (restrict to localhost if monitoring from the same server), you are virtually giving away all the important data to whoever wishes to gain unauthorized access to the server.

Try to restrict access to read-only; there is hardly any need to grant write access to monitoring software.
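
For net-snmp on Linux, such a restriction takes a couple of lines in snmpd.conf (a minimal sketch; the community name and the monitoring host's address are examples):

    # /etc/snmp/snmpd.conf
    # read-only community "monitor", queries accepted only from the monitoring host
    rocommunity monitor 192.168.0.10
    # or, when monitoring from the same server, accept local queries only:
    # rocommunity monitor 127.0.0.1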

It's easy to find the OIDs (object identifiers) of the data you wish to monitor; e.g., to allow viewing general system information such as RAM usage, grant access to the .1.3.6.1.4.1 hierarchy.

Most of the data you could use are numeric; thus, the SNMP-based monitors available in IPHost Network Monitor can be used to create very precise monitors reflecting the level of resource usage, without creating server-based scripts and communicating with them.
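
On a Linux server running net-snmp, for instance, RAM statistics live under the UCD subtree (.1.3.6.1.4.1.2021). A minimal Python sketch of polling them, assuming the snmpget utility is installed and using the example host and community from above:

    #!/usr/bin/env python3
    # Query total and available RAM via net-snmp's snmpget; print usage in percent.
    import subprocess

    HOST, COMMUNITY = "192.168.0.5", "monitor"      # example values
    OID_TOTAL = ".1.3.6.1.4.1.2021.4.5.0"           # UCD-SNMP-MIB::memTotalReal.0, kB
    OID_AVAIL = ".1.3.6.1.4.1.2021.4.6.0"           # UCD-SNMP-MIB::memAvailReal.0, kB

    def snmp_get(oid):
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, oid], text=True)
        return int(out.strip())

    total, avail = snmp_get(OID_TOTAL), snmp_get(OID_AVAIL)
    print(f"RAM used: {100 * (total - avail) / total:.1f}%")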

Write access

Note that certain variables (OIDs) can be writable, thus allowing you to control, to some extent, the device your monitoring software is connected to via SNMP.

Note also that SNMP is supported by many devices, such as routers, and it can be used, say, to programmatically restrict or even close access to them, or to set usage limits based upon the parameters being monitored. For example, you can restrict or limit transfer speed for Ethernet cards if the amount of data transferred crosses a limit. However, it is strongly advised that network monitoring software never modify the settings of any device it monitors.

Data cobweb: let computers talk to computers


Inhuman interface

Monitoring software often deals with human-readable data. Whether you are monitoring a Web site's availability or a search results page, you have to parse and analyse the data. Although this may seem to fit all needs, it isn't so.

To begin with, Web sites are most often created for human beings. That is, content is generated to suit human visitors, to make them comfortable, to represent data in the manner most useful for human readers.

Also, these pages may be quite complex, and their creation involves a number of services, such as database engines. If a monitor watches regular Web pages, the site engine performs quite a lot of useless work. To satisfy a monitor, a simpler and much less resource-hungry approach could be used.

Raw and pure data

I suppose you've got the idea: let's generate a simpler, shorter Web page prepared for monitoring software, for computer processing. First, we won't use that many resources; second, we can arrange the data in a manner that makes their analysis much more efficient.

The implementation may differ. Myself, I use a single string that contains a number of characters, one for every service I plan to watch. Processing them becomes a simple task; and if such a 'state summary' isn't regenerated for every request, it saves even more time and resources.
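
A minimal Python sketch of the monitor side of this idea (the URL, the service list and the one-character codes, 'U' for up, are my assumptions, not a fixed format):

    #!/usr/bin/env python3
    # Fetch a compact state summary and map each character to a watched service.
    from urllib.request import urlopen

    SERVICES = ["web", "db", "mail"]                  # position i = service i
    STATUS_URL = "http://example.com/status.txt"      # e.g. "UUD" = mail is down

    summary = urlopen(STATUS_URL, timeout=10).read().decode().strip()
    for name, state in zip(SERVICES, summary):
        print(f"{name}: {'OK' if state == 'U' else 'ALERT'}")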

Now imagine monitoring other services. Opening ports for them, or creating tunnels and secured connections able to traverse firewalls and/or other restrictions, can be unsafe and tedious. Instead, we can use the same approach: generate state reports as simple files that can be accessed via another protocol, such as HTTP, and that contain enough data to notify the monitoring software of all important state changes.

Additional security can be achieved if the data sent are encrypted, and/or if a special data representation is used, obscuring the actual data received. If this encrypted report page is cached and only regenerated when the services' state changes, the resource savings are even greater.

You will make all the data read-only, thus preventing any possible unauthorized data access and avoiding exposing the intrinsics of your site to the monitoring software.

You will only have to parse "usual", human-readable pages when you monitor someone else's sites.

Common traits in network monitoring and DoS


Pros and cons of monitoring

The disadvantage of not monitoring network resources is obvious: in case of any connectivity or functionality problem you are not warned, and a number of unpleasant consequences may follow.

All right, so we accept that we need monitoring. The next question is whether it can harm your resources (a site, for example) in any way. The not-so-obvious answer is: yes, it can.

Let's imagine we have a site and let's analyze who accesses it and how the site can react to this.

Bots, your zealous readers

Who's reading your site, mostly? Unless yours is a popular blog with thousands of readers, the answer is: bots. Search spiders, RSS bots and whatnot. Human readers can at times be in a severe minority.

Spiders can be a nuisance, especially new, mostly misbehaving ones. They do not always obey robots.txt rules and thus can exhaust your bandwidth, overload the server and cause many other malfunctions. However, they often supply an identifier (the 'User-Agent' header) and can be told from all the other readers, especially if the IP addresses of crawlers are known.

Faceless monitoring

Here we come to a very interesting conclusion: if a monitoring process doesn't behave like a browser, or at least like a well-known bot (it sends no HTTP headers and connects to a server periodically, at known time intervals), it can be viewed as harmful.

Anonymous spiders, those harvesting email addresses and other information, are seldom written in a well-behaved manner. They do not respect robots.txt (of course), and they have a habit of overwhelming the server with requests.

In other words, if your monitoring process doesn't introduce itself with a User-Agent, accesses the site too often and doesn't throttle data transfer speed, it can be viewed as a data harvester or even an attacker. A common DoS technique is exhausting bandwidth, when a number of processes request large pages many times in quick succession.

So, if you plan to monitor HTTP/HTTPS resources, make sure that
— your monitoring software supplies HTTP headers, providing a unique identifier;
— your monitoring software throttles the connection speed and does not consume too much bandwidth;
— your monitoring software adds small random intervals to its polling schedule.
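
A minimal Python sketch of a check that follows these three rules (the URL, the User-Agent string, the chunk size and the five-minute base period are examples):

    #!/usr/bin/env python3
    # A "polite" HTTP poller: identifies itself, throttles reading, jitters its schedule.
    import random, time
    from urllib.request import Request, urlopen

    URL = "http://example.com/"
    HEADERS = {"User-Agent": "MyMonitor/1.0 (admin@example.com)"}

    while True:
        req = Request(URL, headers=HEADERS)            # rule 1: unique identifier
        with urlopen(req, timeout=15) as resp:
            size = 0
            while chunk := resp.read(8192):            # rule 2: read in chunks,
                size += len(chunk)                     # pausing between them
                time.sleep(0.05)                       # to limit transfer speed
        print(resp.status, size)
        time.sleep(300 + random.uniform(-30, 30))      # rule 3: jittered schedule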

Monitoring tools: learn more from reinventing the wheel


Standard tools in non-standard situations

Monitoring tools like IPHost Network Monitor provide a set of built-in monitor types to handle most routine tasks.

They are sufficient to handle the majority of monitoring needs: it is often enough to make sure a given type of service is accessible, without checking it in depth.

However, a number of tasks may require something not present in the distribution. What is the best approach in such a case?

A real-life example: you would like to monitor users' inbox sizes and warn them if those inboxes grow too large.

One approach could be a script that ssh's to the mail server, checks the size of a given mailbox and returns it in whatever units required.
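
A minimal sketch of that script in Python (the server name and mailbox path are examples; stat(1) is used here instead of parsing the output of 'ls'):

    #!/usr/bin/env python3
    # Print the size, in bytes, of a remote mailbox file over ssh.
    import subprocess

    out = subprocess.check_output(
        ["ssh", "mailserver", "stat", "-c", "%s", "/var/mail/alice"], text=True)
    print(int(out))  # the monitoring tool reads this value from standard output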

However, if the mail inbox directory can be accessed as a network share, the task can be solved using the built-in file size monitor type. Moreover, if the monitor is named after the user part of the mailbox address, the alert can be built in such a manner that it notifies the mailbox owner when the quota limit is trespassed.

The wheel reinvented

There can be several solutions to a problem. For example, to check a given user's presence at a site, we could analyse that user's profile page and look for a text pattern.

However, that would mean we stress not only the database the data are taken from, but the Web server as well. When monitoring a site, it is often a good idea not to overwhelm the site with the monitoring itself. Too frequent loading of Web pages may create unnecessary stress on the site.

The above task could be solved by slightly different means: monitoring an SQL database. If a proper SQL query is constructed, the first column returned is interpreted as a performance value. Thus, most of an SQL-driven site's activity may be monitored just by creating the correct SQL query, able to return a numeric result.
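
For example, the user-presence check above might become a query like this (MySQL-style syntax; the table and column names are hypothetical):

    -- returns a numeric value: non-zero if the user was seen recently, 0 otherwise
    SELECT COUNT(*)
      FROM sessions
     WHERE user_name = 'alice'
       AND last_seen > NOW() - INTERVAL 10 MINUTE;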

To sum up: when a monitoring task can be solved by checking a file's size, the result of an SQL query, or a string found in an HTTP(S) response, it's better to solve the task in that manner.

Custom scripts can be used when a task not easily reduced to the cases above must be solved: e.g., when a file of a given size, modification time and access permissions must be present in a known location.

And, finally, a good suggestion: if you are using a monitoring tool and have failed to solve a task by the available means, it's always worth contacting the developer(s) and suggesting a new monitor type.

How often do you encounter a network monitoring task when the built-in means of the monitoring tool you are using aren't enough?

Network monitoring: simple means to count traffic


Traffic under control

The traffic I am talking about isn't the kind webmasters like so much. It's traffic in general, the kind called bandwidth in hosting-related discussions.

There is no such thing as unlimited bandwidth, so the ability to control traffic flow and react to surges, or to predefined amounts being consumed, is very important.

In almost every situation, the quicker the response to traffic consumption, the better. SNMP, or the Simple Network Management Protocol, provides the means to handle certain traffic-related tasks, including ones that can be handled using IPHost Network Monitor. Let's be more specific.

Counters and triggers

Traffic counters are provided by a number of SNMP-enabled networking devices, including routers, network adapters etc.

One of the typical tasks is to notice a traffic surge and take measures: for example, shut down the line to prevent excessive traffic consumption, and notify the administrators.

The task is both simple and complex. Although SNMP-enabled devices have so-called traffic counters (OIDs looking like .1.3.6.1.2.1.2.2.1.10.N for inbound counters and .1.3.6.1.2.1.2.2.1.16.N for outgoing ones), those counters have one significant feature: in the general case, they can't be reset by means of any API or software tool. They can only grow and wrap around a predefined value (depending on whether they are 32- or 64-bit long).

However, a simple script that reads these counters and calculates an average or a total can be implemented quite easily, so we can use the following means to set up simple traffic control.

First, we create such a script and call it periodically from IPHost Network Monitor; the script prints to standard output a numeric value indicating traffic consumption.
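
A minimal Python sketch of such a script, printing inbound bytes per second for one interface (the host, community, interface index, state file path and the 32-bit wrap value are assumptions):

    #!/usr/bin/env python3
    # Read ifInOctets, compare with the previous sample and print the byte rate.
    import json, os, subprocess, time

    HOST, COMMUNITY, IF_INDEX = "192.168.0.1", "monitor", 2
    OID = f".1.3.6.1.2.1.2.2.1.10.{IF_INDEX}"    # ifInOctets for that interface
    STATE, WRAP = "/var/tmp/ifin.state", 2**32   # wrap value for a 32-bit counter

    raw = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, OID], text=True)
    now, value = time.time(), int(raw)

    if os.path.exists(STATE):
        prev = json.load(open(STATE))
        delta = (value - prev["value"]) % WRAP   # modulo handles counter wrap-around
        print(int(delta / (now - prev["time"])))
    else:
        print(0)                                 # first run: no previous sample yet

    json.dump({"time": now, "value": value}, open(STATE, "w"))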

Second, we create a custom alerting rule using the «Set SNMP value» alert; since the fields controlling a network interface's state are read-write, we can effectively block all the traffic through an interface by means of a single SNMP value change.

Now we can shut down an interface. To re-enable it, we can create another rule, triggered when the traffic state becomes normal again (i.e., the script mentioned returns an acceptable value), which sets the mentioned flag back to the value that enables traffic flow.
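
The read-write field in question is ifAdminStatus (.1.3.6.1.2.1.2.2.1.7.N, where 1 means up and 2 means down); with the net-snmp command-line tools, the change might look like this (the host, write community and interface index are examples):

    # administratively shut down interface 2, then bring it back up
    snmpset -v2c -c private 192.168.0.1 .1.3.6.1.2.1.2.2.1.7.2 i 2
    snmpset -v2c -c private 192.168.0.1 .1.3.6.1.2.1.2.2.1.7.2 i 1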

Caveats

There are three versions of SNMP. Roughly speaking, the higher the version, the more means of access control it provides. Please make sure proper authentication is performed before SNMP commands (such as «set value») are executed. Using SNMP v1 is strongly discouraged: anyone able to modify SNMP values has full control over the device's interfaces and access to a wealth of information about the device's settings.

Also, do not forget to test all the scripts on a «sandbox» device, one you can manage in whatever manner you like without affecting real-life data and/or devices.

You shouldn't shut down the interface you are using to connect to the device: after that, no control is possible until you manage to connect through another interface and/or reset the device.

Please note also that the case above is the simplest, and not a really useful, example of how to control traffic flow. In real life a number of users use a network device, and selective actions are expected when a user exhausts the traffic quota assigned to them.

Keeping a mailbox tidy: a pile vs. folders


A discipline of correspondence

A habit of keeping everything tidy isn't inborn. It has to be taught, and the lessons aren't always pleasant.

Email is an important means of communication and a great tool for organizing the whole workflow. If you wish to know how accustomed a person is to a good discipline of mind, just look at how that person's mail is kept.

An immense, almost never cleaned-up inbox is what is most commonly seen. A pile of letters, both important and volatile, personal and business. Unless forced to move all the messages to relevant folders, such people will never do it themselves.

Big mailboxes aren't good: apart from the fact that they mostly contain garbage, as time goes by they slow down the system and become a security issue as well.

Inboxes are a temporary storage place, and this should be taught to every person using a business email box. Let's see how our network monitoring software might help introduce this concept into the mind of every employee.

To see, but not to read

If a Un*x type of server runs the mail service, the overall security need not be compromised. To monitor individual mailboxes, we should use an approach where

  • the process checking inbox sizes cannot read the inbox files themselves;
  • the checking doesn't require granting any additional privileges;
  • users are warned individually when/if their inbox grows too big.

This task is simple.

First, let's create a domain user (since we are using an NT domain), or use an existing domain user created for technical purposes.

Second, create a network share on the mail server, visible only to the mentioned user and operating in read-only mode. The share must provide read-only access to the inbox directory.

Third, create a host in the IPHost Network Monitor installation you use to monitor your intranet, and add individual monitors for every mailbox we plan to watch. Each monitor is of the «file» type and refers to the inbox file name.

The last and most important part: create a custom alert and use the monitor name (matching the user part of the email address) to determine whom to send the alert to when the condition (the file is bigger than a given value) is met.
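
For comparison, here is the same logic as a standalone Python sketch (the share path, mail domain, SMTP host and the 100 MB limit are all assumptions):

    #!/usr/bin/env python3
    # Check every inbox file on the read-only share and mail the owners of big ones.
    import os
    import smtplib
    from email.message import EmailMessage

    SHARE, DOMAIN, LIMIT = r"\\mailserver\inboxes", "example.com", 100 * 2**20

    for user in os.listdir(SHARE):
        size = os.path.getsize(os.path.join(SHARE, user))
        if size > LIMIT:
            msg = EmailMessage()
            msg["Subject"] = f"Your inbox is {size // 2**20} MB, please tidy it up"
            msg["From"], msg["To"] = f"postmaster@{DOMAIN}", f"{user}@{DOMAIN}"
            msg.set_content("Your mailbox has exceeded the recommended size.")
            with smtplib.SMTP("smtp." + DOMAIN) as smtp:
                smtp.send_message(msg)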

Now you can not only watch that the mailboxes don't grow too big, but also alert the users themselves.

Can you offer a simpler solution?

Planning security: breaches in the wall


The weakest link

People are the weakest link in any security concept.

This is why security should be viewed as an ideology, not just several actions performed once.

A common mistake is to make the outer bounds of security as solid as possible while leaving internal areas virtually insecure. In other words, the intranet might be shielded as thoroughly as possible against any intrusion from the outer Internet, yet the intranet itself might be very vulnerable.

If there are Windows workstations, they should all be shielded as well. Antivirus software, a personal firewall, antimalware tools: these are mandatory bricks the security wall consists of. All security updates must be applied as soon as possible; unless there's a real need, software must only be installed by the system administrator.

In short, the optimal approach is this: plan the security concept as if every computer in your intranet were accessible from the Internet by anyone, with no guarantee that its users are all careful and literate enough to avoid all the possible dangers.

A Trojan horse «caught» by a user on an intranet computer might nullify whatever security measures have been taken so far. And this is just one example of how vulnerable a network becomes if even a single computer lacks the appropriate security measures.

An ideal security concept

The most important, most fundamental principles I build my security concepts upon are listed below, just to give an impression.

Nothing is safe enough. Not a single computer, network device or anything else participating in the intranet. No device may be considered safe. Apply the same security hardening steps to every piece of your intranet.

No default values. Nothing may be left as programmed by default if it is to be used for real. No factory-generated passwords, no default names (identifiers), if they can be changed.

All the passwords must be young. This is what all users hate: inventing new passwords regularly. Teaching people simple ways of creating strong passwords isn't an easy task, yet any weak password may jeopardize the entire security.

Make backups often. Back up everything: every piece of data related to setting up any service, piece of hardware etc. Back up often and keep copies in several places. Remember that people belong to one of two classes: those who haven't lost any data of importance yet, and those who make backups on a regular basis. Log files should also be remembered, since they can serve as proof of whatever happened, if you need to investigate anything later.

Any change is suspicious. Literally: any. A change of password performed from an unusual place. A change of the IP addresses the user(s) used to log in from. A change in the pattern of visiting Internet resources. And so on. Acquire a habit of studying every change that looks even a bit unusual.

No precaution might be superfluous. Again, literally: no precautionary action may be considered superfluous, extra, unnecessary or weak.

What important parts of the concept might I have omitted?

Windows security: no right, no fright


Defaults aren't best

According to a report by BeyondTrust, 90 percent of Windows vulnerabilities can be mitigated by configuring users to operate without administrator rights. In other words, the less powerful the user, the fewer unpleasant consequences follow.

System administrators do not grant privileges easily: the greater the number of privileges, the more powerful the account and the more destruction it might cause. Many an action, relatively safe when no privileges are available, becomes very dangerous otherwise. However, all personal PC users are tempted to grant their main accounts all the rights. Yes, it is simpler: no need to log out and log in as an administrator whenever it is necessary to install or upgrade software, and so on.

In total, 64 percent of all Microsoft vulnerabilities reported last year are mitigated by removing administrator rights. The default state of an operating system, in which the user has too much access, has become very dangerous.

No power or superpower

The security model of Un*x means you either have all the rights possible (if you use the root, i.e. superuser, account) or, roughly speaking, no rights at all (if you use a regular user account).

The very idea of using a privileged user all the time must be discarded; it should be strongly advised against.

It is well known that there are two types of people: those who have never lost important data, and those making regular backups.

Of course it's a joke, yet it bears an important truth: the less time you spend restoring lost or corrupted data, the more time you have for more important tasks.

So, the conclusion is:

  • avoid assigning users any privileges unless it's absolutely inevitable
  • use privileged users to perform local administrative tasks only

Have you experienced any problems working under a privileged user account?

Network monitoring basics: teach your sites to express themselves


Sites are important

The more you are involved in Web presence, the more important are the sites you maintain.

The exact type and function of the site(s) isn't important. They all must be available all the time, and the more complex they are, the harder it is to keep track of their state.

Even if you keep a blog run under the ubiquitous WordPress (or any other engine), there are many things to watch.

All the pages must be available, and they should load quickly.

There should be no unmoderated comments: when visitors spot obvious spam comments, they may leave your blog for good.

And so on and so forth. What is the easiest means of making sure the site is available?

Monitoring tools

There are dozens of monitoring tools. You might have heard about Nagios or other common tools. The only problem is that these tools require deep knowledge of operating system intrinsics, while those developed for a broader audience should be powerful and multifunctional. This is why I suggest you try IPHost Network Monitor.

OK, now you have a tool at hand. What should you check?

Special pages for special cases

The obvious solution is to load a page, say the start page, to make sure it works.

However, it is not the best possible solution.

Monitoring assumes the page is checked on a regular basis, and important resources should be checked often. If a site is complex and updated many times a day, every such check means wasting the server's resources.

If you run several instances of a monitoring tool, to ensure the connectivity is checked as well, the load will grow accordingly.

One solution is to load a blank, or at least a static, page. However, that will tell your monitor nothing about all the other subsystems. On a properly set up site there are many services that should be available: the database engine, the email system, Web services and all their components, and perhaps many others as well. Loading a blank page won't tell you whether your site feels well.

The best solution in such a case is to make a special page displaying short but essential information about all the subsystems. It can be generated by a standalone process that checks the health of all the site's components and prints an easily understandable string. In that case, all you need to do is find the string in the loaded page. And the page itself might be very small.
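
A minimal Python sketch of such a generator process (the checks, ports, output path and the ALL-OK marker are assumptions, not a prescribed format):

    #!/usr/bin/env python3
    # Probe the site's subsystems and write a short status string as a static page.
    import socket

    def port_open(host, port):
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            return False

    checks = {
        "db":   port_open("127.0.0.1", 3306),   # e.g. MySQL
        "mail": port_open("127.0.0.1", 25),     # local SMTP
        "web":  port_open("127.0.0.1", 80),
    }

    with open("/var/www/html/status.txt", "w") as f:
        f.write("ALL-OK" if all(checks.values()) else
                "FAIL:" + ",".join(k for k, v in checks.items() if not v))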

Do you have your own solutions for this task? How would you make sure the site works fine?

About this Archive

This page is an archive of recent entries in the Network monitoring category.
