OS Upgrade: Windows 10

Published Sep 30, 2014

Microsoft’s Windows operating systems account for the vast majority of the computer market. However, the company’s current operating system (OS) has been met with dislike and is generally seen as a dud. So it’s not entirely surprising to see the company roll out a new OS only three years after the release of Windows 8, ditching the longer release intervals we’re used to seeing from Microsoft.

Release Information

There is no official release date yet for Windows 10; the only word from Microsoft is to expect the operating system to be available in 2015. At its September 30th event, Microsoft divulged more information about Windows 10 and announced an early preview version available starting Wednesday.

New Features and Redesigns

One of the biggest complaints about Windows 8 is the lack of a start menu. While not exactly a new feature, the start menu will be back in Windows 10. This is one of the few details Microsoft has confirmed about the operating system. The released screenshots (see below) show a start menu that is a hybrid of Windows 8’s start screen and the classic start menu seen in Windows 7 and earlier.

Screenshot showing the redesigned Windows 10 start menu.

Microsoft’s Cortana will also be part of Windows 10. Judging by Microsoft’s commercials, Cortana is positioned as a response to Apple’s Siri: a voice-responsive assistant that can conduct searches, provide reminders, and launch other programs.

Metro apps will run inside the traditional Windows desktop. This means the applications launched from the Windows 8 start screen (the ones many users don’t know how to close because there is no red X) will run in the familiar windowed format. All the usual window controls will be there too: you can close an app with the red X, minimize it, reposition it, and resize it.

Learning From Windows 8’s Mistakes

Other news about Windows 10 concerns features of Windows 8 that will not return. Besides restoring the start menu, Windows 10 will at least offer an option to disable the start screen, if the screen is not done away with entirely. The Charms bar (the icons that appear in Windows 8 when the mouse rests in either right-hand corner) will be removed. There are rumors that the options accessed through the Charms bar will move to a new button located next to the Minimize, Maximize, and Close buttons in the upper-right of a windowed application.

Cost

There is no official word on how much Windows 10 will cost, only speculation. Based on current software pricing trends, and especially on how Apple has handled its OS updates, many observers believe that Windows 10 will not cost much. Some even go so far as to suggest that it will be free. Another thought is that the new OS will be subscription based, following what Microsoft offers with Office 365.

What This Means To You

If you have an older computer, especially one running Windows XP or Vista, upgrading to Windows 10 when it is released will be a great plan. Windows 10 is expected to rise from the ashes of Windows 8’s disgrace and flourish, much like Windows 7 did compared to Vista. As with all major new software releases, it’s safest to wait a month or so after Windows 10 is officially released. If you use any custom software, always make sure it is compatible with the new system before switching.

How can I keep my data safe *and* accessible?

Published Jun 12, 2013

Your business has data vital to its operation.  Think about it for a moment: if all of a sudden you lost, or were unable to access, your most vital data, how would it affect your business?  Would it grind day-to-day operations to a halt, or would it be a minor inconvenience?  The answer to that question will help identify your data storage and backup needs.  Either way, it costs you time and can seriously affect your bottom line.

Resiliency and Redundancy

Resiliency and redundancy are two measurements that can be used to evaluate the effectiveness of a variety of technological systems.  In the case of data storage, resiliency describes the ability to avoid data loss through reliable hardware, software, and security measures.  Redundancy describes data duplication in the case where hardware, software, or security has failed.  Both are important to take into account when selecting storage solutions.

Ease of use

More important than either of these measurements, arguably, is “ease of use”.  You have to be able to use your “resilient” and “redundant” data.  If every time you want to access or back up a file you have to enter a code to a vault and plug in an external hard drive, you will probably avoid that procedure any chance you get.  The best storage and backup solution you can have is one you do not have to think about to use.

Network Storage

There are many cost-effective network storage devices available for the small-to-medium business that provide data resiliency, which we may cover specifically in another post.  On these devices, data is often split across several hard drives in such a way that the loss of any single drive means no data loss.  Network storage devices provide ease of use for in-office data storage.  However, anyone who works out of the office will quickly realize their data is not as easily accessible as they might like.

Cloud Storage

There are numerous services available to store your data “in the cloud” (as with the network storage devices, I will leave evaluation of these services to another post).  Access to your files through these services has been made relatively easy these days, to the point the files almost feel like they are on your local machine.  Cloud storage solutions are excellent, until you do not have access to the Internet.  Just like Network Storage, if you do not have access to the network where the data is stored (in this case, the Internet) then you do not have access to the data.  Security is also certainly a concern on an external service, but stick with a company with a strong security track record and use a strong password for yourself and you will be pretty well set.

Combined Storage and Backup Solution

A great way to ensure the resiliency, redundancy, and ease of use of your data is to combine a network and cloud storage solution.  Data stored on an office network device can be automatically backed up to a cloud storage service for remote access as well as disaster recovery situations.  Remote users can also use the cloud storage to back up their devices.
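For the technically inclined, here is a minimal sketch in Python of what that kind of automatic backup could look like. It assumes the network device is mounted as a drive letter and that a cloud storage client syncs a local folder; both paths are hypothetical placeholders, not a recommendation of any particular product.

```python
import os
import shutil

# Hypothetical paths: a mounted network share and a folder watched by a
# cloud storage client. Adjust both to match your own environment.
NETWORK_SHARE = r"Z:\company-data"
CLOUD_FOLDER = r"C:\Users\owner\CloudDrive\backup"

def mirror(src, dst):
    """Copy any file that is new or has changed since the last run."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src_file = os.path.join(root, name)
            dst_file = os.path.join(target_dir, name)
            # Copy when the destination is missing or older than the source.
            if (not os.path.exists(dst_file)
                    or os.path.getmtime(src_file) > os.path.getmtime(dst_file)):
                shutil.copy2(src_file, dst_file)

mirror(NETWORK_SHARE, CLOUD_FOLDER)
```

Scheduled to run nightly, a script like this gives you an off-site copy without anyone having to think about it, which is exactly the ease-of-use goal described earlier.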

Security

Whatever data storage solution you choose, do not forget about security: strong passwords, physical access restrictions (lock your network closet, please), and encryption.  With data duplication comes an increased number of potential access points to that data.  If you are storing any kind of personal or proprietary information, encrypt it.  TrueCrypt is an excellent (and free!) tool for file or whole-drive encryption.  Data encryption adds a little more complexity to the mix, but can be as easy to work with as entering one more password at the beginning of each day.
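If you want to script the encryption of individual files rather than use a drive-level tool, something like the following Python sketch is one possibility. It uses the third-party cryptography library rather than TrueCrypt, and the file names are hypothetical:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it somewhere safe;
# losing the key means losing the data.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a (hypothetical) file of sensitive data.
with open("customers.csv", "rb") as infile:
    ciphertext = f.encrypt(infile.read())
with open("customers.csv.enc", "wb") as outfile:
    outfile.write(ciphertext)

# Decryption reverses the process with the same key.
with open("customers.csv.enc", "rb") as infile:
    plaintext = f.decrypt(infile.read())
```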

 

Get started by thinking again about the importance of data to your business.  What is your storage solution today?  How can it be improved?  Talk to a Little Reed expert for advice.

Why do I need a Little Reed expert?

Published May 22, 2013

When you buy a computer, you want it to last forever.  What most people fail to consider is how their use of a computer will change over time, or how the few mechanical components within the computer will begin to fail.  You may have purchased that laptop with the intent to use it for email and word processing, but then someone introduced you to a cool productivity application or you started using it for high-definition movies.  The computer requirements for those sets of tasks may be very different, and what was adequate when you bought it may be less than adequate now.  As for mechanical issues: hard drives, case fans, power supplies, and optical drives are all potential points of failure.

You may have experienced issues with your computer that you simply put up with and attribute to the downside of computer ownership, or the fault of a particular operating system, when in reality these issues can be handled with a little help and regular maintenance.  Just like you wouldn’t drive 40,000 miles without an oil change (or you shouldn’t, at least), neither should your computer go without an occasional check-up and the resulting recommended maintenance.

Proactive Maintenance

Most people call for help when they encounter problems, but many of the problems you have can be avoided with proactive maintenance:

  • Running regular scans for malware (malicious software) and viruses can prevent the majority of problems encountered on an individual computer.
  • Keeping software up-to-date is another proactive maintenance step, and can eliminate glitches with software that an individual may not even be aware existed.
  • Disk and operating system cleanup is another task that should be performed regularly.  Scheduled defragmentation may be necessary depending on how the computer is utilized, and clearing temporary files can free up precious disk space in systems with less-than-desirable hard drive capacity (a small scripted example follows this list).
  • Periodically evaluate changes in computer usage to preemptively determine when upgrades may be necessary.
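As promised in the list above, here is a rough Python sketch of a scripted cleanup check. The drive letter is an assumption for a typical Windows machine; treat this as an illustration, not a polished maintenance tool.

```python
import os
import shutil
import tempfile

# Report free disk space so shrinking capacity gets noticed early.
usage = shutil.disk_usage("C:\\")  # use "/" on non-Windows systems
print(f"Free space: {usage.free / usage.total:.0%} of {usage.total // 2**30} GB")

# Clear files from the user's temp directory, skipping anything in use.
temp_dir = tempfile.gettempdir()
for name in os.listdir(temp_dir):
    path = os.path.join(temp_dir, name)
    try:
        if os.path.isfile(path):
            os.remove(path)
        else:
            shutil.rmtree(path)
    except OSError:
        pass  # Locked by a running program; leave it alone.
```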

Reactive Maintenance

Reactive maintenance is less desirable than proactive maintenance, as it means a problem has already occurred and the user has noticed it.  Reactive maintenance should not be needed as frequently if a proactive maintenance schedule is followed, but some issues, such as mechanical failure, cannot be avoided by proactive maintenance alone.

  • Loud disk drives or excessively noisy fans can indicate a failure or near-failure of the mechanical components in a computer system.  If these components fail, it can lead to much more costly repairs.  An overheated processor or disk drive that has worn bearings can cost significantly more than just replacing the components as soon as the problem is noticed.  Recovering data from a failed disk is not always possible, and processor replacement may not be cost effective.
  • Performance problems or issues with saving and retrieving files may also be related to gradual disk failure without the noisy warning of failed bearings.  This is a problem that cannot be avoided with proactive maintenance, and should be checked out before it turns into a loss of data.
  • Security warnings and excessive pop-ups may be indicative of malware that has gone unchecked, or updates that have failed to download for some time.  If left alone, these problems will continue to worsen, and can leave your computer vulnerable to further malware infestation and performance degradation.  Some malware may even attempt to mislead you into purchasing software you do not need, under the false pretense that it will fix the problem you are having, a problem that software caused in the first place.  Now you’re looking at not only the repair cost, but also whatever you may have been duped into spending on unnecessary and potentially harmful software.

Hardware Assessment and Upgrade Planning

The final reason to call an expert is hardware assessment and upgrade planning.  After a few years, you may be using your computer for much more than you intended, and it is probably time to reevaluate how it is being used.  If your computer is more than a few years old, there is a good chance the hardware could use some upgrading to meet the new demands you’ve placed on it.  If your computer already seems to be on the edge of what it can do, and you are thinking about adding another piece of software, or you want to start using it for something more intense like 3D rendering or video editing, you need to explore your options.

You also need to remember that when you purchase a newer computer, or choose to upgrade the operating system on your existing computer, some of your older applications may not work.  Many outdated software packages were written for a particular operating system, and unless you have since upgraded that software, it may not work on a newer version of Windows or Mac OS.  This is less likely in a home setting, but many businesses use applications written many years ago and have chosen not to update because the application works for them.  Just be aware that the old application and its data may require special consideration when upgrading the computer on which it runs.

Summary

Proactive maintenance is good, reactive maintenance should be minimal, and periodic assessment and upgrade planning is a necessity.  Don’t hesitate to call and allow us to help with setting up your proactive maintenance and upgrade road map, and avoid reactive maintenance and unnecessary downtime.

 

How do I recognize a fraudulent website?

Published May 13, 2013

The Internet is an extremely useful tool, one that puts a multitude of services and entertainment right in front of you.  It is also riddled with bad places, sites that are trying to steal your information or trick you into giving away personal data.  You can minimize the risk of encountering them by learning to recognize the signs that a site should be avoided.

Attention to Detail

First and foremost, make sure the domain name in the address bar is what you expect.  If you clicked on a link that you expected to take you to google.com, but it instead takes you to stealyourpassword.com, that’s probably not where you want to be.  If you click on a link to a website from what you thought was a legitimate email request from your bank, the difference in the URL may not be that obvious.  For example, the website of Credit Union West is cuwest.org, but a fraudulent link might take you to cuwestonline.org.  If you are unsure of what the correct domain for a company website should be, try using your favorite search engine and put in the company name. The first result will almost always be the official website. If you still find yourself at a website wondering if all is well, this is where browser security features come into play. By looking at the address bar, there are a few ways to tell if a website is legitimate.
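If you like to think in code, that domain check boils down to something like this Python sketch; the expected domain is just the example from above:

```python
from urllib.parse import urlparse

EXPECTED_DOMAIN = "cuwest.org"  # the domain you know to be legitimate

def looks_legitimate(url: str) -> bool:
    """Check that a link actually points at the expected domain."""
    host = urlparse(url).hostname or ""
    # Accept the domain itself or a subdomain of it, nothing else.
    return host == EXPECTED_DOMAIN or host.endswith("." + EXPECTED_DOMAIN)

print(looks_legitimate("https://www.cuwest.org/login"))   # True
print(looks_legitimate("https://cuwestonline.org/login")) # False
```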

Browser Features

From top to bottom: Internet Explorer, Mozilla Firefox, and Google Chrome. This website is legitimate. The domain name is what we expect (google.com), the website is using https, and it has been validated by the browser, as indicated by the padlock icon present in all three browsers (circled in red).

The presence of a globe in the address bar means the website is not authenticated in any way.  This does not necessarily mean it is a fake; it just means there is nothing beyond your own judgement to confirm the site is the one you want to visit.  It also means anything you transmit to the website is not secured and could be read by an eavesdropper.  The presence of https:// in the address bar and a grey or green padlock indicate that the connection is secure and that the website has been validated by a third-party service.  These are very strong indicators that the site you are visiting is the one you want. A green padlock in particular is the strongest indicator of a website’s authenticity, as it means the site has met additional verification criteria.
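For the curious, the validation the browser performs behind that padlock can be approximated in a few lines of Python. This is only a rough sketch of the idea, not what any particular browser actually runs:

```python
import socket
import ssl

def check_certificate(hostname: str):
    """Connect over TLS and verify the site's certificate against
    trusted authorities, roughly what a browser does for the padlock."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Issued to:", dict(pair[0] for pair in cert["subject"]))
            print("Issued by:", dict(pair[0] for pair in cert["issuer"]))

check_certificate("google.com")  # raises ssl.SSLError if validation fails
```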

Stay Safe

These tips should make you more adept at spotting web forgeries, but as always it is better to play it safe when it comes to your personal information.  If at any point you still feel uncomfortable putting your information on the Internet, it is best to follow your gut: take care of your business in person, or over the phone using a number from official company material.

Why does my computer feel slower than it used to?

Published May 2, 2013

At some point, you have probably experienced the joy of a new computer.  It zooms through start-up, the internet loads in a blink, and opening applications can be done in less time than it took to click the mouse.  But now you have had the computer for a while, and it feels like a dinosaur.  You may even have enough time to leave and grab a cup of coffee while waiting for it to turn on.  This may seem common, but it certainly is not normal or even a result of the age of your computer.  Quite the contrary, the computer itself is probably running just as quickly as it did when you bought it.

File Disorganization

As you use your computer, the programs that you utilize will put lots of files into storage: internet history, application settings, pictures imported from your camera, or your music collection, just to name a few. As files are created, relocated, and destroyed, related information is no longer stored in close physical proximity.  Imagine if your assistant filed everything you handed her in the first available folder, regardless of where it should actually go.  When you needed to find something, it would take her significantly longer to find it, as it is not right next to other pieces of related information.  The same thing happens with your computer: your accounting files, for example, take longer to load simply because it takes the computer longer to gather each piece of the information you want.  This is known as disk fragmentation.  It is not typically the leading cause of computer performance degradation, but it is something to keep an eye out for.

Heavier Load

When you first used your new computer, you didn’t have everything you needed.  It is likely that you had to download some applications from the web, or install productivity software from a CD. Many of these applications aren’t just using your system resources when you can see them, but may leave behind running processes to check for updates or to reduce load time when you request the application.  This bit of trickery can make the individual application appear to load quicker, but may reduce the speed of other operations on your system.  Depending on the number of extra processes on your computer, this can account for a significant portion of the performance drop through processor or memory utilization.

In addition, too high a system load will cause your computer to spend more time using your hard drive for temporary storage instead of the much faster RAM (referred to as “memory”) that it has.  The more programs running on your computer, the less memory is available for each of them, and information that would otherwise be kept in memory is put on the hard drive instead.  Imagine you are moving out of your house and you’ve loaded everything into an ultra-fast sports car with next to nothing for trunk space (this is memory).  It will get your stuff to the new house extremely fast, but if you have too many household goods, you’ll have to have your neighbor with the slow but larger moving van assist you (the hard drive).  So you get to the new place in your sports car in ten minutes, but you spend another ten minutes waiting for the rest of your stuff to arrive.

Just like it would benefit you to sell some stuff before moving or get a faster moving van, it may be advantageous to get rid of some unnecessary applications that run in the background if possible, or upgrade the amount of memory available.  Some of these applications you may not be able to remove, but many of them fall into the category of unwanted software.

Unwanted Software

Malware and bloatware: these words are thrown around a lot, and while the applications themselves may not cause harm to your computer, they are annoying at the least, and will take up those precious resources we just discussed.  Malware is software with some malicious intent, typically tricking the end user into believing there is a more serious problem and getting them to pay for something they don’t need or to download additional malware, compounding the problem.  Bloatware is typically harmless software that comes pre-installed on your computer, the result of computer and software manufacturers agreeing to load the computer with their products as a marketing technique.  The programs individually may not represent much of a problem, but if an excessive number of them run in the background, they can collectively be the reason your computer uses the hard drive for temporary storage instead of memory.

Excellent, the "Free Physical Memory" amount listed is high.  If it drops to down into the low hundreds, something needs fixing.

Excellent, the “Free Physical Memory” amount listed is high. If it drops to down into the low hundreds, something needs fixing.

So what can I do?

To fix these issues, there are a few steps you can take.  To correct disk fragmentation, run the disk defragmentation utility that came with your operating system.  Before you start, the utility will report the current level of fragmentation, giving you an idea of whether it represents a significant portion of your problem.  Most of the time it will indicate a low level of fragmentation (a few percent), in which case defragmenting may not give you a noticeable performance increase.
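On Windows you can also ask the built-in command-line defragmenter for an analysis without committing to a full defragmentation. A small sketch, assuming a recent version of Windows and an administrator prompt:

```python
import subprocess

# The /A switch analyzes the volume and reports fragmentation
# without actually defragmenting anything.
result = subprocess.run(["defrag", "C:", "/A"],
                        capture_output=True, text=True)
print(result.stdout)
```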

As for resource usage, running the task manager under Microsoft Windows and viewing the “Performance” tab will give you at-a-glance information about resource usage.  This should be done with all applications closed, to understand the baseline performance of your computer.  The main areas to look at are the “CPU Usage” graph and the “Physical Memory (MB)” block.  If your CPU usage is constantly above a few percent, or your free physical memory is in the low hundreds, it is time to check background applications and see what is running that can be removed. If your computer is already running only the programs that it needs, then a processor or memory upgrade may be in order.
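If you prefer a scripted view over the Task Manager window, the third-party psutil library for Python can pull similar numbers. A rough sketch:

```python
import psutil  # pip install psutil

# Sample overall CPU usage over one second.
print(f"CPU usage: {psutil.cpu_percent(interval=1):.0f}%")

# Available physical memory, close to the figure Task Manager shows.
mem = psutil.virtual_memory()
print(f"Free physical memory: {mem.available // 2**20} MB")

# List the five biggest background processes by memory use.
procs = [p for p in psutil.process_iter(["name", "memory_info"])
         if p.info["memory_info"] is not None]
procs.sort(key=lambda p: p.info["memory_info"].rss, reverse=True)
for p in procs[:5]:
    print(p.info["name"], p.info["memory_info"].rss // 2**20, "MB")
```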

Conclusion

Like a car, your computer will slow down under heavier load or when in need of basic maintenance.  Before you run off and purchase faster hardware, try to take care of what you have.  New hardware will certainly run faster under the same load just as a stronger car will, but with time, it too will slow down if you keep putting bricks in the trunk.

What makes a password great?

Published Apr 30, 2013

Monkey, ninja, baseball, football: words related to hurling inexplicably fast objects towards unwitting onlookers?  Not this time.  These are just four of the top 25 most stolen passwords of 2012, as reported by splashdata.com.  Stolen is the key word here.

Two methods for stealing login credentials are:

  1. Guessing the password based on general or personal information (birthdays, spouse’s name, child’s name, workplace surroundings)
  2. Cracking a web site or server, gathering huge lists of login credentials (often not even encrypted)

Method 1 is what I will call a “local threat”, where the intruder probably knows you, and at least has direct access to your workspace or to information about you that is public knowledge.  Method 2 is a “remote threat”, where an attacker is not necessarily targeting you, per se, but rather a large segment of users in which you happen to be included.

To protect yourself from these methods, create a password that incorporates each of the following suggestions:

Make it a “Strong Password”

The generally accepted definition for “strong password” is one of at least eight characters in length with uppercase, lowercase, number, and special (.!@# etc.) characters.  This, however, is a bare minimum recommendation.  A good password uses as many characters as you are willing to remember, and although a “strong password” is a great defense, it is not the whole defense.
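To make the definition concrete, here is a small Python sketch of that bare-minimum check; the sample passwords are only illustrations:

```python
import string

def is_strong(password: str) -> bool:
    """Bare-minimum check: at least eight characters, with uppercase,
    lowercase, number, and special characters all present."""
    return (len(password) >= 8
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("monkey"))      # False -- and one of 2012's most stolen
print(is_strong("H@mburger7"))  # True
```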

Make sure it has nothing to do with you or your workspace surroundings

Password on a Post-It (Original photo by Pavel Krok)

Remove that Post-It from the bottom of your keyboard right now! Go ahead, I won’t look.

This is an important step to follow in order to avoid password theft by method 1.  Assume everyone knows everything there is to know about you.  Now, choose a password with that in mind.  “St@nford98” might be considered a “strong password”, but when you hang your 1998 Stanford degree above your computer monitor, it may be one of the first things someone tries when they sit down at your desk.  Also, you may have the strongest, most random 26-character password in the world, but if you have it written down anywhere, someone will find it.

Make it easy enough to remember

Picking a word or phrase with an easily recallable number association is a great way to come up with a password, but make sure the word is an obscure reference that nobody would think to guess.  Pick an insignificant detail from a memorable event.  For example, I recently took my daughter to her first baseball game and we had lunch together there in the 7th inning: “H@mburgerInThe7th”.  It does not mean anything to anybody but me, and I can remember it fairly easily so I do not have to write it down.

Make it unique to the web site you are creating it for

This is a very important aspect of password security.  In the case that a web site has failed to properly secure your login information, it is important that one compromised password does not compromise every account you own.  One method I have come up with provides two fail-safes to protect your passwords: copy and paste a portion of the domain name of the website you are accessing as the start of your already-strong password.  My password for google.com becomes “gooH@mburgerInThe7th”; my password for twitter.com becomes “twiH@mburgerInThe7th”.

With unique passwords, a compromised Twitter password does not also result in a compromised Google password.  Also, physically copying and pasting that portion of the domain every time makes you look at the domain name.  That means if you are the target of a phishing attack at twtter.com (notice there is no “i”) and you cut and paste “twt” for the start of your password, not only will they not get your real Twitter password, but you will probably not attempt to log in when you realize you are not actually at “twitter.com”.
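The transformation itself is trivial, as this little Python sketch shows (the function name is my own invention). Keep in mind that copying and pasting by hand, as described above, has the side benefit of forcing you to read the domain name, which a script would skip:

```python
def site_password(domain: str, base_password: str) -> str:
    """Prefix an already-strong base password with the first
    three letters of the site's domain name."""
    prefix = domain.split(".")[0][:3]
    return prefix + base_password

print(site_password("google.com", "H@mburgerInThe7th"))   # gooH@mburgerInThe7th
print(site_password("twitter.com", "H@mburgerInThe7th"))  # twiH@mburgerInThe7th
```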

Change your passwords

Finally, with all these suggestions in mind, it is time to stop using “password1”.  Coming up with a good password for new log-ins is great, but it does nothing to protect what is already out there, so go back and update your existing accounts as well.

How should I secure my wireless network?

Published Apr 25, 2013

If you have a wireless network, you may have asked yourself, “Is a wireless network secure enough for data sensitive work, like banking?”  This is an excellent question to ask, and the answer is conditionally: “yes, if you’ve adequately secured your connection.”  What, then, is adequately secured?

Unsecured Connections

Without enabling any of the security features on your wireless access point, any traffic between your devices and the network is sent without modification.  Anyone within range of your wireless network can listen to, or “sniff”, the signal that you’re using and record the data.  Everything that you transmit is out in the open, and while some institutions such as banks protect the information you send, it is still undesirable to have this information accessible to anyone within receiving range.   Additionally, there is nothing stopping an outsider from connecting to your network and having access to your connected technology resources such as printers and network file shares.  Given the right set of tools and time, anything you transfer could be available to those with malicious intent.  As a business, it may not only be your information that is vulnerable.  Customer payment methods, contact information, and proprietary company data are just a few of the pieces of information that you want protected.

What is encryption?

Encryption is the process of taking some type of information, such as text or computer data, and converting it into a different, unreadable form.  The original data is known as “plain text”; once converted, it is known as “ciphertext.”  This ciphertext is not readable on its own, and must be converted back to its original form to be of use.  The conversion is done with a “key”, a special piece of data that specifies how the information is to be transformed.  As long as this key is known only to the people who you want to have access to the information, the information can be considered secure.  Encrypting information makes it unreadable in transit, and helps to ensure only the intended recipient has access to it.
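To make the idea concrete, here is a toy Python sketch. The “cipher” below is a simple XOR transformation, far too weak for real use, but it shows how the same key turns plain text into ciphertext and back:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Transform each byte of the data using the key. Applying the
    same key a second time reverses the transformation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain_text = b"Meet at the bank at noon"
key = b"secret"

cipher_text = xor_cipher(plain_text, key)  # unreadable without the key
recovered = xor_cipher(cipher_text, key)   # the same key converts it back
print(cipher_text)
print(recovered == plain_text)             # True
```

Real wireless encryption uses far stronger, thoroughly vetted algorithms, but the roles of plain text, key, and ciphertext are exactly the same.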

What are my wireless encryption options?

One of the early methods of encryption for wireless networks was known as Wired Equivalent Privacy, or WEP.  The problem with WEP was that it didn’t live up to the name.  There was a weakness in the way the encryption was implemented, and it wasn’t long before anyone with Google and a little techie know-how could connect to your network.  Initially this process could take a long time, but through certain techniques this process has been reduced to minutes, making WEP ineffective at protecting wireless transmissions for the last decade.

Due to the weakness inherent in WEP, the trade association responsible for certifying WiFi products came up with a new standard known as Wi-Fi Protected Access, or WPA.  WPA was intended as an intermediate measure to secure wireless networks until the more secure WPA2 was finalized in 2004.  While much more secure than WEP, WPA and WPA2 still suffer from vulnerability to “brute-force” attacks, which rely on repeatedly guessing different passwords until a match is found.  WPA/WPA2 do not have the weakness that existed in WEP, and as of today they are still considered secure, as long as certain best practices are followed when implementing networks to mitigate the vulnerability to brute-force attacks.

With WPA/WPA2, there are two options for implementation.  The first is known as “personal” mode, where all devices on the network share the exact same key.  This would be like having a regular door lock to your office, with a single key and lots of copies for all your employees.  The problem here is that if one key is stolen, you have to re-key the lock and give out new keys.  This is okay as long as you only have a handful of the same key to replace, but if you have more devices it can become a hassle to change your entire network this way.  The second method of implementation is known as “enterprise” mode, and requires an additional piece of equipment that stores credentials.  Every device on the network is given its own key to connect instead of sharing a single key.  This would be like having a keypad lock on your office, and each employee having their own combination.  If one combination is compromised or an employee leaves the company, it is a much easier task to simply invalidate the combination and give the user a new one.

Recommendation

Our recommendation is to implement WPA2, and if you do not have a large number of wireless devices, to use it in “personal” mode.  This provides a high level of security at a lower cost than enterprise mode.  If you find yourself having to reconfigure your network regularly due to theft of devices or employee turnover, however, then enterprise mode may be appropriate for you in order to simplify management of the network.  In addition, using a strong key of 13 characters or more will make brute-force attacks impractical; reading your data would take thousands of man-years.
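A bit of back-of-the-envelope arithmetic shows why. The character set size and guess rate below are my own (generous) assumptions:

```python
# A 13-character key drawn from the 95 printable ASCII characters,
# attacked at an assumed one billion guesses per second.
keyspace = 95 ** 13
guesses_per_second = 1_000_000_000

seconds = keyspace / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"Worst case: {years:.2e} years")  # on the order of a billion years
```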

Remember that regardless of the encryption methods in place, given enough time and resources, your data can eventually be compromised.  The intent of encryption, however, is to make the time and resources required unreasonably large.  This is what was meant by “adequate” security.  Does your attacker really have thousands of man-years to spend trying to crack open your encrypted data?

Why is my wireless network so slow?

Published Apr 22, 2013

Struggling to figure out why your WiFi connection feels sluggish compared to your hard-wired devices? Perhaps it’s not the fault of the technology, but an issue with configuration and utilization.

Perfect World, Worst Case

With wireless technology, every device operating in the same frequency range shares the available bandwidth.  This includes not only the devices connected to your network, but ANY wireless devices operating in the same frequency range as yours, such as your neighbors’ WiFi.  Additionally, the rated speed of your wireless access point is not per-device, but a total shared rating. In a simplified perfect-world model, a WiFi router with ten devices utilizing their connections to capacity (worst case) can only provide one tenth of that speed per device. Remember those “walkie-talkies” you played with as a kid?  Only one of you could transmit your voice at a time.  The same concept applies here, which is why the bandwidth is shared.
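The arithmetic of that simplified model is easy to sanity-check; the rating below is just a hypothetical example:

```python
# Simplified perfect-world model: the rated bandwidth is shared.
rated_mbps = 300      # hypothetical access point rating
active_devices = 10   # all transmitting at capacity (worst case)

per_device = rated_mbps / active_devices
print(f"Best possible per device: {per_device} Mbps")  # 30.0 Mbps
```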

Real World Problems

In the real world, given the situation above, your connection speed would be much lower when taking into account the additional network traffic associated with maintaining the connection and correcting for errors in transmission. Errors are introduced when some bit of information that is sent doesn’t make it to its intended destination or arrives unreadable, either due to the distance involved or due to radio interference (noise) corrupting the signal.  That microwave oven in the break room next to your office?  It operates in the same frequency range as your WiFi connection, making it a direct source of radio frequency (RF) interference.  Cordless phones within your business environment?  If they are of the 2.4GHz variety, they can be an additional source of noise.

So how do you fix it?

You can reduce interference by relocating things such as microwaves and cordless phones, or simply by operating on a frequency with less wireless traffic.  The wireless frequency range used by WiFi is split into channels, smaller divisions of the total frequency range.  You may have seen the channel setting on your wireless access point and left it set to “auto” or some other default value.  Each channel uses a particular part of the wireless spectrum, and if you select the channel with the least noise and the least utilization, you can maximize the use of your wireless bandwidth.  Software exists that allows you to take an accounting of the wireless networks in your area and provides some easy-to-read visuals, letting you know which channel would be most advantageous for your network.
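To illustrate the idea, here is a toy Python sketch that picks the least crowded of the non-overlapping 2.4GHz channels from a made-up set of scan results:

```python
from collections import Counter

# Hypothetical scan results: (network name, channel) pairs like those
# a survey tool would report for the 2.4GHz band.
scan = [("CoffeeShop", 1), ("Neighbor1", 6), ("Neighbor2", 6),
        ("OfficeNextDoor", 11), ("Warehouse", 6), ("FrontDesk", 1)]

# In the 2.4GHz band only channels 1, 6, and 11 avoid overlapping
# one another, so pick the least crowded of those three.
counts = Counter(channel for _name, channel in scan)
best = min([1, 6, 11], key=lambda ch: counts[ch])
print(f"Least crowded non-overlapping channel: {best}")  # 11
```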

Screenshot of inSSIDer for Home, one tool used for analyzing wireless networks. Note the heavy overlap in the graph at the bottom, which gives a quick visualization of wireless network channel allocations and which channels to avoid.

Once you’ve determined which channel is most appropriate, you can reconfigure your access point to use it.  While the “auto” setting on most routers purports to do this, it is typically inadequate for high-traffic areas or anything more critical than home usage, and manually setting the channel is preferred.  In the screenshot above, you can see that there are lots of networks overlapping in the middle of the 2.4GHz band.  It is also worth noting that while 2.4GHz is the most widely used frequency, equipment can also be purchased that operates in the 5GHz range with reduced interference.  However, that frequency is not without drawbacks, most notably reduced operating range and increased cost.  As such, 2.4GHz is usually the better choice overall.

Conclusion

Besides the unrealistic options of relocating all your office microwaves to the farthest end of the hall or forcing your company president to stop using the cordless phone in his office suite (neither of which will help your reputation), channel selection is the best option for maximizing wireless bandwidth availability.  It is one of the easiest fixes to make, but also one of the easiest things to get wrong if ignored. Don’t forget, however, that a hardwired Ethernet connection will always win out in speed and security (which we’ll cover in a future post), and wireless connections should only be used if absolutely necessary.