Recently in System administration Category

Network monitoring: simple means to count traffic


Traffic under control

The traffic I am talking about isn't the kind webmasters like so much. It's network traffic in general, what is called bandwidth in hosting-related discussions.

There is no such thing as unlimited bandwidth, so the ability to watch traffic flow and react to surges, or to reaching a predefined amount consumed, is very important.

In almost every situation, the quicker the response to traffic consumption, the better. SNMP (Simple Network Management Protocol) provides the means to handle a number of traffic-related tasks, including those that can be handled with IPHost Network Monitor. Let's be more specific.

Counters and triggers

Traffic counters are provided by a number of SNMP-capable networking devices, including routers, network adapters and so on.

One typical task is to notice a traffic surge and take measures, i.e., shut down the line to prevent excessive traffic consumption, and notify the administrator.

The task is both simple and complex. Although SNMP-enabled devices provide so-called traffic counters (OIDs looking like .1.3.6.1.2.1.2.2.1.10.N for inbound counters and .1.3.6.1.2.1.2.2.1.16.N for outgoing ones), these counters have one significant feature: in the general case, they can't be reset by means of any API or software tool. They can only grow and wrap around a predefined value (depending on whether they are 32- or 64-bit long).

However, a simple script that reads these counters and calculates averages/totals can be implemented quite easily, so we can use the following means to set up simple traffic control.

First, we create such a script and call it periodically from IPHost Network Monitor, to receive an integer value indicating traffic consumption. The script prints a numeric string to standard output indicating the traffic consumed.
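
As an illustration, here is a minimal sketch of such a script, assuming net-snmp's snmpget utility is installed; the host address, community string and interface index are placeholders, not values required by IPHost. It prints the number of octets consumed since the previous run, handling the counter wrap-around:

    #!/usr/bin/env python3
    # Minimal sketch: read an SNMP inbound octet counter, compute the delta
    # since the previous poll and print it to standard output.
    import os
    import subprocess

    HOST = "192.0.2.1"        # hypothetical router address
    COMMUNITY = "public"      # read-only community string (placeholder)
    IFINDEX = 2               # the N in .1.3.6.1.2.1.2.2.1.10.N
    WRAP = 2 ** 32            # 32-bit counters wrap around at 2^32
    STATE = "/var/tmp/ifInOctets.%d.state" % IFINDEX

    oid = ".1.3.6.1.2.1.2.2.1.10.%d" % IFINDEX
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, oid])
    current = int(out.decode().split()[0])

    previous = None
    if os.path.exists(STATE):
        with open(STATE) as f:
            previous = int(f.read().strip())
    with open(STATE, "w") as f:
        f.write(str(current))

    if previous is None:
        print(0)              # first run: no delta to report yet
    else:
        delta = current - previous
        if delta < 0:         # the counter wrapped around
            delta += WRAP
        print(delta)          # octets consumed since the last poll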

Second, we create a custom alerting rule using the «Set SNMP value» alert; since the field controlling a network interface's state (ifAdminStatus) is read-write, we can effectively block all traffic through an interface by means of a single SNMP value change.

Now we can shut down an interface. To re-enable it, we create another rule, triggered when the traffic state becomes normal again (i.e., the script mentioned returns an acceptable value), which sets the mentioned flag back to the value that enables traffic flow.
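
To make the mechanism concrete, here is a hedged sketch of what the «Set SNMP value» alert effectively does, expressed with net-snmp's snmpset called from Python; the host, the read-write community and the interface index are placeholders. ifAdminStatus accepts 1 (up) and 2 (down):

    # Minimal sketch: toggle an interface via ifAdminStatus (.1.3.6.1.2.1.2.2.1.7.N).
    import subprocess

    HOST = "192.0.2.1"         # hypothetical device address
    RW_COMMUNITY = "private"   # read-write community string (placeholder)
    IFINDEX = 2                # interface to control

    def set_admin_status(status):
        # status: 1 enables the interface, 2 shuts it down
        oid = ".1.3.6.1.2.1.2.2.1.7.%d" % IFINDEX
        subprocess.check_call(
            ["snmpset", "-v2c", "-c", RW_COMMUNITY, HOST, oid, "i", str(status)])

    set_admin_status(2)   # block all traffic through the interface
    set_admin_status(1)   # re-enable it once traffic is back to normal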

Caveats

There are three versions of SNMP in use. Roughly speaking, the higher the version, the more means of access control it provides. Please make sure proper authentication is performed before SNMP commands (such as «set value») are executed. Using SNMP v1 is strongly discouraged: anyone able to modify SNMP values will have full control over the device's adapters, and access to plenty of information about the device's settings.

Also, do not forget to test all the scripts on a «sandbox» device, one you can manage in whatever manner you like without affecting real-life data and/or devices.

You shouldn't shut down the interface you are using to connect to the device: after that, no control is possible until you manage to connect through another interface and/or reset the device.

Please also note that the case above is the simplest, and not a particularly practical, example of how to control traffic flow. In real life, a number of users use a network device, and selective actions are expected when a user exhausts the traffic quota assigned to them.

Keeping mailbox tidy: a pile vs. folders


A discipline of correspondence

The habit of keeping everything tidy isn't inborn. It is taught, and the lessons aren't always pleasant.

Email is an important means of communication and a great tool for organizing workflow. If you wish to know how accustomed a person is to good discipline of mind, just look at how that person's mail is kept.

An immense, almost never cleaned up inbox file is what is most commonly seen: a pile of letters, both important and volatile, personal and business ones. Unless forced to move all the messages to relevant folders, such people will never do that themselves.

Big mailboxes aren't good: apart from the fact that they contain mostly garbage, as time goes by they slow down the system and become a security issue as well.

Inboxes are a temporary storage place; this should be taught to every person using a business email box. Let's see how our network monitoring software might help introduce this concept to every employee.

To see, but not to read

If a Un*x type of server is used to run the mail server, the monitoring can be done without compromising overall security. To monitor individual mailboxes, we should use an approach where

  • the process checking inbox sizes may not read the inbox files themselves
  • checking doesn't assume granting any additional privileges
  • users must be warned individually when/if their inbox grows too big

This task is simple.

First, let's create a domain user (since we are using an NT domain), or use an existing domain user created for technical purposes.

Second, create a network share on the mail server, visible only to the mentioned user and operating in read-only mode. The share must provide read-only access to the inbox directory.

Third, create a host in the IPHost Network Monitor instance you use to monitor your intranet, and add an individual monitor for every mailbox we plan to watch. The monitor is of the «file» type and refers to the inbox file name.

The last and most important part: create a custom alert and use the monitor name (matching the user part of the email box) to choose whom to send the alert to when the condition (the file is bigger than a given value) is met.

Now you can not only watch that the mailboxes don't grow too big, but also alert the users themselves.
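
For the curious, here is a rough sketch of the same check performed outside IPHost Network Monitor: scan the inbox files visible over the read-only share and warn each owner whose mailbox exceeds a limit. The share path, the size threshold and the mail domain are assumptions for illustration only:

    # Minimal sketch: warn owners of oversized inboxes via email.
    import os
    import smtplib
    from email.message import EmailMessage

    MAILDIR = r"\\mailserver\inboxes"   # hypothetical read-only share
    LIMIT = 200 * 1024 * 1024           # 200 MB threshold (placeholder)

    for name in os.listdir(MAILDIR):
        size = os.path.getsize(os.path.join(MAILDIR, name))
        if size <= LIMIT:
            continue
        msg = EmailMessage()
        msg["Subject"] = "Your inbox is %d MB; please tidy it up" % (size // 2**20)
        msg["From"] = "postmaster@example.com"
        msg["To"] = "%s@example.com" % name   # file name matches the user part
        msg.set_content("Inboxes are temporary storage; move messages to folders.")
        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)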

Can you offer a simpler solution?

Planning security: breaches in the wall


The weakest link

People are the weakest link in any security concept.

This is why security should be viewed as an ideology, not just several actions performed once.

A common mistake is to make the outer bounds of security as solid as possible, leaving internal areas virtually insecure. In other words, the intranet might be shielded against any possible intrusion from the outer Internet, while the intranet itself remains very vulnerable.

If there are Windows workstations, they should all be shielded as well. Antivirus software, a personal firewall, antimalware: mandatory bricks the security wall consists of. All security updates must be applied as soon as possible; unless there's a real need, software should only be installed by the system administrator.

In short, the optimal approach is this: plan the security concept as if every computer in your intranet were accessible from the Internet by anyone, with no guarantee that its user(s) are all careful and literate enough to avoid all possible dangers.

A Trojan horse «caught» by a user on their intranet computer might nullify whatever security measures have been taken so far. And this is just one example of how vulnerable a network becomes if even a single computer lacks appropriate security measures.

An ideal security concept

The most important, most fundamental principles I build my security concepts upon are given below, just to give an impression.

Nothing is safe enough. Not a single computer, network device, or whatever else participates in the intranet. No device may be considered safe. Apply the same security hardening steps to every piece of your intranet.

No default values. Nothing should be left as programmed by default if it is to be used for real. No factory-generated passwords, no default names (identifiers), if they can be changed.

All passwords must be young. This is what all users hate: inventing new passwords regularly. Teaching people simple ways of creating strong passwords isn't an easy task, yet any weak password may jeopardize the entire security.

Make backups often. Back up everything: every piece of data related to setting up any service, piece of hardware etc. Back up often, and keep copies in several places. Remember that people belong to one of two classes: those who haven't lost any data of importance yet, and those who make backups on a regular basis. Log files should also be remembered, since they can serve as proof of whatever happened, if you need to investigate anything later.

Any change is suspicious. Literally: any. A password change performed from an unusual place. A change in the IP addresses the user(s) used to log in from. A change in the pattern of Internet resources visited. And so on. Acquire the habit of studying every change that looks even a bit unusual.

No precaution might be superfluous. Again, literally: no precautionary actions may be considered superfluous, extra, unnecessary, or weak.

What important parts of the concept might I have omitted?

Windows security: no right, no fright


Defaults aren't best

According to a report by BeyondTrust, 90 percent of Windows vulnerabilities can be mitigated by configuring users to operate without administrator rights. In other words, the less powerful the user, the fewer unpleasant consequences follow.

System administrators do not grant privileges easily: the greater the number of privileges, the more powerful the account, and the more destruction it might cause. Many an action, relatively safe when no privileges are available, becomes very dangerous otherwise. However, all personal PC users are tempted to grant their main account all the rights. Yes, it is simpler: no need to log out and log in as an administrator whenever it is necessary to install or upgrade software, and so on.

In total, 64 percent of all Microsoft vulnerabilities reported last year are mitigated by removing administrator rights. The default state of an operating system, where the user has too much access, has become very dangerous.

No power or superpower

The security model of Un*x means you either have all the rights possible (if you use root, the superuser account) or, roughly speaking, no rights at all (if you use a regular user account).

The very idea of using a privileged user all the time must be discarded and strongly advised against.

It is well known that there are two types of people: those who have never lost important data, and those making regular backups.

Of course it's a joke, yet it bears an important truth: the less time you spend restoring lost or corrupted data, the more time you have for more important tasks.

So, the conclusion is:

  • avoid assigning users any privileges unless it's absolutely inevitable
  • use privileged accounts to perform local administrative tasks only (see the sketch below)
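
As a small illustration of the second point, a script meant for everyday use can refuse to run elevated; the check below uses the IsUserAnAdmin() call from shell32 and is a sketch, not part of any product mentioned here:

    # Minimal sketch (Windows-only): refuse to run under an administrator account.
    import ctypes
    import sys

    if ctypes.windll.shell32.IsUserAnAdmin():
        sys.exit("Running with administrator rights; please use a regular account.")
    print("Running unprivileged, as recommended.")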

Have you experienced any problems working under a privileged user account?

Network monitoring basics: teach your sites to express themselves


Sites are important

The more you are involved in Web presence, the more important the sites you maintain become.

The exact type and function of the site(s) isn't important. They all must be available all the time, and the more complex they are, the harder it is to keep track of their state.

Even if you keep a blog, run under the ubiquitous WordPress (or any other engine), there are many things to watch.

All the pages must be available, and they should load quickly.

There should be no unmoderated comments: when visitors spot obvious spam comments, they may leave your blog for good.

And so on and so forth. What is the easiest means to make sure the site is available?

Monitoring tools

There are dozens of monitoring tools. You might have heard of Nagios, or other common tools. The only problem is that these tools require deep knowledge of operating system intrinsics, whereas tools developed for a broader audience should be powerful and multifunctional. This is why I suggest you test IPHost Network Monitor.

OK, now you have a tool at hand. What should you check?

Special pages for special cases

The obvious solution is to load a page (say, the start page) to make sure it works.

However, it is not the best possible solution.

Monitoring assumes the page is checked on a regular basis, and important resources should be checked often. If a site is complex and updated many times a day, every such check means wasting the server's resources.

If you run several instances of a monitoring tool, to ensure the connectivity is checked as well, the load will grow accordingly.

One solution is to load a blank, or at least a static, page. However, that will tell your monitor nothing about all the other subsystems. On a properly set up site there are many services that should be available: the database server, the email system, Web services and all their components, and perhaps many other services as well. Loading a blank page won't tell you whether your site feels well.

The best solution in such a case is to make a special page displaying short but essential information about all the subsystems. It can be generated by a standalone process that checks the health of all the site's components and displays an easily understandable string. In such a case, all you need to do is find that string in the loaded page. And the page itself might be very small.
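
A minimal sketch of such a generator is given below; it probes a few subsystems over TCP and writes a single short line a monitor can search for. The hosts, ports and output path are placeholders, and real checks would be more thorough:

    # Minimal sketch: probe subsystems and publish one status string.
    import socket

    CHECKS = {                        # hypothetical subsystem endpoints
        "web": ("127.0.0.1", 80),
        "db": ("127.0.0.1", 3306),
        "smtp": ("127.0.0.1", 25),
    }

    def alive(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout):
                return True
        except OSError:
            return False

    failed = sorted(name for name, addr in CHECKS.items() if not alive(*addr))
    status = "ALL-OK" if not failed else "FAILED:" + ",".join(failed)

    # The monitor fetches this tiny page and simply looks for "ALL-OK".
    with open("/var/www/html/health.txt", "w") as f:
        f.write(status + "\n")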

Do you have your own solutions for this task? How would you make sure the site works fine?

Blogging on: watch your blog's heartbeat


Blogs: uptime is important

Blogs aren't just «personal diaries» any more. They are used to generate income; thus blogging has become a business. Just like any other kind of business, blogging may suffer a lot if blogs become slow to respond, cease to be available, or exhibit a number of other undesirable traits.

It's hard to watch even a single blog when it comes to keeping it clean. A number of parameters should stay within acceptable bounds: page loading speed, document presence (no '404' errors) and so on. The blogger should be able to notice and moderate incoming comments and trackbacks, make sure the announcements of new posts are spread through the statusphere in time, and so on and so forth.

Most of a blog's status data may be obtained by calling a specially crafted URL (using the GET method) or by posting data to a URL (the POST method). In other words, retrieving such data and analyzing it may serve as a good means of blog health monitoring.
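
A minimal sketch of such a GET-based check follows; the URL, the marker string and the timing threshold are placeholders, not a real blog API:

    # Minimal sketch: fetch a status URL, check the marker and the load time.
    import time
    from urllib.request import urlopen

    URL = "https://blog.example.com/health"   # hypothetical status page
    MARKER = "ALL-OK"                         # string the page should contain
    MAX_SECONDS = 5.0                         # acceptable loading time

    started = time.monotonic()
    try:
        page = urlopen(URL, timeout=10).read().decode("utf-8", "replace")
        elapsed = time.monotonic() - started
        healthy = MARKER in page and elapsed <= MAX_SECONDS
    except OSError:
        healthy = False

    print("blog is fine" if healthy else "blog needs attention")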

If a blog experiences a problem, it must be identified and handled as soon as possible. A blog's downtime may cost a lot, so one should monitor all the data reflecting the blog's status.

Monitoring: looking for an optimal tool

Network monitoring isn't a new idea; there are hundreds of tools, services and so on. The question is what to choose and why.

First, you could think of monitoring services such as mon.itor.us.

Pros: they often perform monitoring from a number of locations; they are often inexpensive or free; they are easy to set up.

Cons: these services often lack a number of useful features, such as a variety of network protocols to check or convenient means of alerting (via SMS and/or Jabber messages). Also, if you need to monitor restricted-access data, you may have to grant the service access to that information, which may be inappropriate.

Second, you could install server-side monitoring software, such as Nagios.

Pros: powerful and efficient; the functionality of such software can often be extended using plug-ins and so on.

Cons: you need a certain amount of knowledge of operating systems and related areas to install, run and troubleshoot such software. Also, such software often lacks a convenient user interface.

Third, you can use desktop monitoring software such as IPHost Network Monitor.

Pros: desktop monitoring software is often easy to use and easy to understand. It can be extended using well-known monitoring products such as Nagios. Also, you can access restricted areas of your site by providing the correct credentials in the configuration of the monitored resources.

Cons: the software must run as long as possible to gather a good amount of service statistics. Also, it can cause high CPU load and consume a significant amount of RAM and disk space.

Conclusion

No service/product is ideal, in fact. The optimal setup could make use of two or more of the choices mentioned.

For example, if a network of servers/services could supply their data to the desktop software, it could expand the capabilities of both.

Until then, a desktop implementation of network monitoring software seems the best option for the absolute majority of bloggers wishing to be notified first when their blog experiences any problem.

Can you spell 'password' or The weakest link in security


Strong password, weak password

Are your passwords strong?

An average Internet user registers at dozens of services, and almost everywhere one has to invent a password. There are few alternatives to good old password verification; some of them are useful, some not so much.

«Convenient, reliable, cheap: choose any two». To have your data secure and access it quickly, it is necessary to use an authentication method that is both simple and reliable. So what are the typical approaches to creating a good, strong, yet easy to remember password?

I say «easy to remember», but passwords, at times, need not be remembered literally. The method I use to invent hard-to-guess yet easy-to-remember passwords is simple: several words are used, interspersed with commas, dots etc., and a group of digits is placed somewhere inside. The words may be taken from any source; the idea is to keep that source mostly unknown to everyone else.
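
A minimal sketch of that scheme is shown below; the word list is a stand-in, since in real use the words should come from a source only you know:

    # Minimal sketch: words interspersed with punctuation, digits inside.
    import secrets

    WORDS = ["copper", "meadow", "quiet", "lantern", "orbit"]  # placeholder source
    SEPARATORS = [",", ".", ";", "-"]

    def make_password(word_count=3):
        parts = [secrets.choice(WORDS) for _ in range(word_count)]
        digits = str(secrets.randbelow(10000)).zfill(4)    # the digit group
        parts.insert(1 + secrets.randbelow(len(parts)), digits)
        return secrets.choice(SEPARATORS).join(parts)

    print(make_password())   # e.g. copper,0412,quiet,orbit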

I won't post long and solid recommendations on how to invent a password. There are many on the Net already; and if you're out of ideas on how to make a password, there are many well-known pieces of software solving that task, starting with pwgen.

The question is how to make many passwords without using the same password for several types of services (which is very insecure: if one of those services somehow leaks the password, a number of your other resources become very vulnerable).

A true crypt for passwords

Well, yes: I write passwords down to a file and keep the file at hand. Insecure? Not quite.

First, I use the TrueCrypt freeware to generate encrypted volumes, a good means to store one's passwords and any other sensitive information. Yes, it also requires a password to mount a TrueCrypt volume; I also use keyfiles, so if I lose the flash drive where a TrueCrypt volume is stored, it can't be deciphered in the absence of a keyfile.

The passwords are recorded in a file, but not in their plain form. I use mnemonic rules and tokens (storing only those parts of the passwords from which the rest can be restored). A good approach, as long as I remember the password for the encrypted volume.

Open your ID

OpenID can be a good alternative to storing many passwords: if a service supports OpenID authentication, you only have to remember the password for your OpenID provider account. There are many providers: Google's user profile URL may serve as an OpenID, as may a LiveJournal account, Chi.mp and many other services, including openid.net.

There are only two problems with the OpenID solution to the password problem.

First, you need a reliable OpenID provider. If it's gone or changes owners, you may as well bid farewell to the resources where you've used that OpenID identity.

Second, if you lose control over your OpenID account, you can lose all the resources where you authenticated with it. It's like placing all the keys on a single keyring: a good way to have all the keys at hand, and to lose them all at once as well.

Myself, I prefer to use either my own OpenID server or a well-known provider, one that will most probably stay with us for many years.

Other doors, other keys

OpenID isn't the only alternative to the name-and-password combination. There are other protocols and techniques (say, OAuth), but with all of them you should remember at least one password. And it had better be long and really hard to guess.

It should be understood that you can't afford to neglect security the moment you have any sensitive data on your computer or on a Net resource.

A weak password means a number of troubles, sooner or later (most probably sooner).

Security isn't a one-time action. It's a mental discipline, a way of thinking. Create a reliable means to store passwords and to restore them; make sure they are all different; change them often. These mantras will remain important for good, yet not everyone on our planet abides by them.

What is your way of making your data protected?

VPS: security doesn't like defaults


VPS: more control

Shared hosting has its limits. Even if the server isn't crowded, the restrictions and obvious security threats (co-hosted sites might be able to access your data; a system administrator's mistake may expose your sites' data to third parties, etc.) might force you to choose another type of hosting.

A VPS (virtual private server), also called a VDS (virtual dedicated server), is a virtual server isolated from the other virtual servers running on the same physical server. VPS prices are very reasonable, and this hosting solution looks good if you can't stand the limitations of shared hosting any more.

Most VPS providers supply a control panel capable of monitoring the network and other activity of the VPS, restarting/stopping/reinstalling it, and so on.

You're in control. However, more power means more responsibility. If you choose a VPS, you should either take security precautions yourself or hire people able to do that.

Defaults are dangerous

Apart from a weak, automatically generated root password, a VPS is vulnerable in many other respects.

First, it has a number of services installed and running, and the default settings of those services (such as the mail servers: SMTP, POP3, IMAP4; the HTTP server and so on) can expose some details of your setup to third parties. But, most important, the versions of the software used to run these services may be out of date and insecure.

Default settings are just that: an average set of parameters, enough to work with a service. They aren't aimed at stronger security (since ultimate security means the lowest possible convenience).

So, unless you wish to witness your brand-new, yours-only VPS becoming a source of problems (such as getting listed at SpamHaus or other public blacklists), or to lose your data and overall control over the VPS, you can't postpone hardening its security. In fact, this should be done before you install and enable your Web servers or whatever network services you plan to run on your VPS.

The first things to do with your VPS

If anything below looks too complex or incomprehensible to you, I suggest you consider hiring a system administrator, or the related services, to take care of your VPS.

Shut down unnecessary services. Don't run something just because it's set up to run by default on your VPS. Refer to the installed OS manuals to determine which services are absolutely necessary and which aren't. Disable the latter.

Install and tune firewall software. If using a Un*x-like system, make sure you set up iptables (or whatever is popular) properly. If using Windows, don't rely on its built-in firewall alone, since it's very limited in capabilities; use a kind of personal firewall to have full control over both inbound and outbound traffic. Use whatever can control traffic in all directions under the OS of your VPS.

Tune the firewall according to the approach «everything not allowed explicitly is forbidden». Don't forget to leave access to SSH (Un*x) or whatever is used to access the VPS console; otherwise you'll have to re-install your VPS.
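
For a Un*x VPS, a hedged sketch of that «deny by default» policy is given below, driven from Python to keep the examples uniform; the SSH port is a placeholder, and the commands should be tried over the provider's console first, in case a mistake locks you out:

    # Minimal sketch: default-deny inbound policy with iptables.
    import subprocess

    SSH_PORT = "22"   # change if you moved SSH to a non-standard port

    RULES = [
        ["iptables", "-P", "INPUT", "DROP"],                      # forbid by default
        ["iptables", "-P", "FORWARD", "DROP"],
        ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],  # local traffic
        ["iptables", "-A", "INPUT", "-m", "state", "--state",
         "ESTABLISHED,RELATED", "-j", "ACCEPT"],                  # replies
        ["iptables", "-A", "INPUT", "-p", "tcp",
         "--dport", SSH_PORT, "-j", "ACCEPT"],                    # keep console access
    ]
    for rule in RULES:
        subprocess.check_call(rule)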

Use non-standard ports for services granting access to the console. It's the simplest means of warding off those trying to get into your system.

Install all the latest security updates for the software you are using. The default VPS installation most probably contains out-of-date versions of many pieces of software.

The steps above do not provide you with absolute security; they are just the basic steps one should take to protect a VPS better. Security is a state of mind and a strategy; to maintain its high level, you should perform a number of actions on a regular basis.

How To: Unblock Executables On Vista

Executable files downloaded to your Vista system may cause a security warning ("The publisher could not be verified. Are you sure you want to run this software?"). To prevent that dialog window from appearing (in case you are sure the file is safe), follow these steps:

1. Open Explorer, locate the file, right-click it and select 'Properties'.

2. Switch to the 'General' tab.

3. Click the 'Unblock' button and confirm by clicking 'OK'.
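
The warning comes from the Zone.Identifier alternate data stream Windows attaches to downloaded files, and deleting that stream is what 'Unblock' does. A hedged sketch of the same operation done programmatically (the path is a placeholder):

    # Minimal sketch: remove the zone information from a downloaded file.
    import os

    path = r"C:\Users\me\Downloads\tool.exe"   # hypothetical downloaded file
    try:
        os.remove(path + ":Zone.Identifier")   # NTFS alternate data stream
        print("Unblocked:", path)
    except FileNotFoundError:
        print("No zone information attached; nothing to do.")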

To disable this feature for all files, use Group Policy. Navigate to
User Configuration - Administrative Templates - Windows Components - Attachment Manager
and enable the policy named "Do not preserve zone information in file attachments".

How To: Troubleshoot Group Policy

Here's a quick guide on how to troubleshoot GPO-related problems.

1. Open Event Viewer and look for Administrative Events under Custom Views. Look for Group Policy events there and read the details.

2. While in Event Viewer, look under
Applications and Services Logs\Microsoft\Windows\Group Policy\Operational
and filter the records by event ID, look for patterns in the events, and read the details.
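
The same records can be pulled from the command line; the sketch below uses the built-in wevtutil tool to print the newest operational events for inspection (the event count is arbitrary):

    # Minimal sketch: dump recent Group Policy operational events as text.
    import subprocess

    LOG = "Microsoft-Windows-GroupPolicy/Operational"
    output = subprocess.check_output(
        ["wevtutil", "qe", LOG, "/c:20", "/rd:true", "/f:text"], text=True)
    print(output)   # read the details, look for repeating event IDs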

3. While viewing the operational log mentioned above, study the policy processing sequence to find the failure points and the corresponding error codes. Use that to determine at which step which policy record malfunctions, and use the information to fix policy processing until everything works as expected.

About this Archive

This page is an archive of recent entries in the System administration category.
