On 2011/09/05, I started my first job as an "Apprentice IT Technician" at a small Managed Services Provider (a type of company famous for trials by fire). In the time since, I've managed to work my way up to "Senior IT / Network Engineer" and learn an astounding amount.
Many lessons have proved to be very useful, both professionally and personally: how to predict the most likely outcomes, how to determine the most efficient course of action, how to deal with pressure and/or large workloads, how to communicate effectively, how to see things from multiple perspectives, and that it's almost always better to do something as soon as you get the chance.
Other lessons have proved to be... frustrating. As you will have guessed from the title, one example is that there are quite a few misconceptions which can result in erroneous, damaging conclusions and should be addressed.
First, it may be important to state that:
- Most people's interaction with IT is limited to help desk and/or PC repair so most of the misconceptions will be related to these areas. In reality, IT is incredibly complicated with many branches and niches, just like other fields (mechanical engineering, accountancy, surgery, etc).
- The information in this post is not exhaustive.
- There are elements of truth to these misconceptions.
Misconception #1: All we do is Google
Search engines are a very useful tool in any field but particularly in IT because:
- The rate of evolution is so enormous that it's impossible to proactively keep up with everything.
- The field of Information Technology and the technology of search engines are inherently linked.
However, while indispensable for learning and research, search engines should be used to enhance core skills, not be the core skill (AKA "google-fu"). There are simply too many scenarios where they fail.
What if the answer doesn't exist?
This happens quite often but, when you think about it, it's not really surprising - any given IT system has a dizzying number of factors which are bound to yield unique, unexpected results sooner or later.
For real-life examples of this, just check my StackExchange post history (select "Posts"). You may notice that I actually end up answering most of my own questions.
What if the answer is wrong?
The simple fact of the matter is that content on the web is posted by people, and people are often wrong and give bad advice.
When researching a subject or a scenario, it's important to have a basic, solid understanding; synthesise information; and consider the implications of given advice.
For example, suppose an FTPS connection doesn't work correctly, so you research the problem and find that the general consensus / recommendation is to use FTP instead, which you confirm succeeds. You could (and many people do) leave it at that. After all, it's working now, right? Yes, but now the credentials and data are being transmitted in plaintext, ripe for theft and/or tampering. You've fixed the problem with the wrong solution and, without knowing it, made it worse.
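To make that risk concrete, here's a minimal, illustrative sketch (the function name is mine, not a real library API) of what an FTP login looks like "on the wire" per RFC 959. Python's `ftplib.FTP_TLS` exists precisely to avoid this by encrypting the channel.

```python
def ftp_login_bytes(username: str, password: str) -> bytes:
    """The raw command sequence an FTP client sends to authenticate (RFC 959).

    Plain FTP has no encryption, so anyone on the network path sees
    exactly these bytes, credentials included.
    """
    return f"USER {username}\r\nPASS {password}\r\n".encode("ascii")

wire = ftp_login_bytes("alice", "hunter2")
print(wire)  # b'USER alice\r\nPASS hunter2\r\n' - the password is right there
```

With FTPS, the same commands are sent, but inside a TLS tunnel, so a network eavesdropper sees only ciphertext.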
What if you can't find the answer?
Sometimes, a problem's root cause is elusive because it's intermittent and/or the symptoms are commonplace, both of which make diagnostics and research difficult.
My experience with Windows Explorer hanging when renaming a folder is a very good example of this: the solution was documented online, but only after extensive diagnostics did I acquire the unique, research-critical information needed to find it.
What if the user / client is there with you?
I'm confident in saying that, in professional services, every single person inevitably and fairly regularly encounters a scenario that requires new knowledge and/or skills, but the process is abstracted away from the client. Why? Because, understandably, most people want their clients to perceive the only thing that's relevant: the carefully considered end result.
So, on the flip side, what if I went to a professional, explained a problem, and they obviously just researched the scenario and offered one of the first solutions that they came up with? It's likely to be a "quick fix" so I wouldn't be confident.
What if you're offline?
Admittedly, in our always-online world, it's rare to be unable to access the Internet but it does happen.
Misconception #2: All we do is say "turn it off and on again"
I blame this one on The IT Crowd and on bad IT people who are lazy or who communicate poorly.
There are very legitimate reasons that IT professionals recommend that a device be restarted:
- It's quicker.
When forcing a new GPO to apply, forcing the DHCP client to obtain IP configuration, restarting a Windows service, etc, a user can sometimes be talked through the steps, depending on their permissions or skill level. Generally, though, rebooting is significantly quicker for everyone involved, especially at scale.
If it's an uncommon problem, no resolution is in sight, and it needs to be fixed quickly, then why not just try a restart and, if the problem recurs, perform Root Cause Analysis? It's simply more efficient.
- It's required.
Some processes such as fully applying GPOs, generating a new Kerberos ticket, or completing a software or device driver installation can only be done by restarting or re-signing in to the device because that's just how it works.
- It's all that can be done.
How else are you going to clear a memory leak or restore a completely unresponsive (frozen) system? It may not be possible even with real-time debugging.
Misconception #3: It's easy and anyone can do it
(This misconception is usually associated with the phrase "Why don't you just...")
Computers have become ubiquitous and incredibly intuitive. Inevitably, some users become very familiar with common tasks and problems in different applications and even different Operating Systems - power users. However, familiarity does not necessarily equate to understanding and, as such, can be deceptive.
IT is easy... to get wrong.
Let's look at some common, basic scenarios with hidden, potentially serious ramifications.
Anyone can set up a PC. You just take it out of the box, turn it on, enter your details, install your apps, and start using it. Right? Yes and no. That's the basic setup, but rarely does anyone give it any further thought.
By default and almost always, data is stored locally (that is, physically on the storage drive inside the computer). What's wrong with that? Well, the data will be susceptible to:
- Loss. Without endpoint backup (or, arguably, file sync), if the storage drive fails, data is accidentally deleted, the computer is hit by ransomware, etc, then it'll likely be infeasible to recover the data.
- Theft. Without full disk encryption, if the computer is lost, stolen, or even just left unattended, then the data (files, emails, passwords, web browser history, etc) can easily be extracted and/or modified.
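As a rough illustration of the "Loss" point, here's a minimal backup sketch in Python (the function names are mine; real endpoint backup tools add versioning, scheduling, and off-site copies) that copies a file and verifies the copy's integrity with a checksum:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum used to confirm the backup matches the original bit-for-bit."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(source: Path, dest_dir: Path) -> Path:
    """Copy `source` into `dest_dir` and verify the copy before trusting it."""
    dest = dest_dir / source.name
    shutil.copy2(source, dest)  # copy2 preserves timestamps too
    if sha256_of(source) != sha256_of(dest):
        raise IOError(f"backup of {source} is corrupt")
    return dest

# Demo with throwaway files:
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "report.txt"
    src.write_text("quarterly figures")
    dest_dir = Path(tmp) / "backups"
    dest_dir.mkdir()
    copy = backup(src, dest_dir)
    print(copy.read_text())  # quarterly figures
```

The verification step matters: a backup that was silently corrupted in transit is discovered now, not on the day you need to restore from it.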
All users want to be able to install their own apps but, in most cases, doing so requires that the user account be granted local administrative permissions, which is overkill to say the least and fraught with danger because, for example, it allows the user (or malware running as them) to:
- Read and write to other local user profiles and, therefore, other users' data.
- Debug memory and, therefore, extract other users' or the system's passwords, password hashes, private keys, etc.
- Install software and, therefore, gain remote access, steal credentials and escalate permissions, leak data, become vulnerable either via malicious code or inevitably unpatched software, cause legal licensing trouble, etc.
- Reconfigure the system and, therefore, create unnecessary inconsistencies and problems.
- Circumvent security / protection.
- Exploit vulnerabilities.
- Create undocumented local administrative user accounts.
Defense in Depth and the Principle of Least Privilege exist for a reason.
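The Principle of Least Privilege applies all the way down to individual files. A tiny, POSIX-flavoured Python sketch (illustrative only; Windows uses ACLs rather than these mode bits):

```python
import os
import stat
import tempfile

# Create a throwaway file, then restrict it to its owner only (mode 0o600):
# read/write for the owner, no access for anyone else on the system.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600 - the minimum access required, and no more
os.remove(path)
```

The same thinking scales up: a user account, like a file, should carry exactly the permissions its tasks require.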
Everyone is familiar with the unavoidable bane of passwords but most don't know that they are seldom used correctly and, in many ways, are no longer sufficient. This may actually be the biggest pitfall in IT and probably deserves more than a subsection.
Anyway, let's look at some example problems and resolutions:
- Only Single Factor Authentication (SFA) is used.
In authentication, there are three factors: knowledge (something that the user knows such as a password), possession (something that the user has such as a smartphone), and inherence (something that is integral to the user themselves such as a fingerprint).
Combining two or more of these factors is known as Two / Multi Factor Authentication (TFA / MFA) and is substantially more secure. If TFA / MFA were ubiquitous, pretty much all of the drawbacks of passwords would be negated.
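The "possession" factor is often a time-based one-time password (TOTP) app on a smartphone. The whole scheme is small enough to sketch with the Python standard library, checked here against the published RFC 4226 / RFC 6238 test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 of a counter, truncated to decimal digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the 30-second time window."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, secret "12345678901234567890"):
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code is derived from a shared secret plus the current time window, a stolen password alone is no longer enough to sign in, and an intercepted code expires within seconds.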
- Passwords are weak and/or re-used.
I understand why this is the case. Password strength criteria are getting more and more complicated and passwords are required by more and more systems, so it's not realistic to expect users to generate and remember countless truly strong passwords.
However, with the frequency of data breaches ever increasing (refer to https://haveibeenpwned.com), the need for strong, unique passwords increases too.
Password managers have largely solved this problem.
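Generating a strong, unique password per system is trivial for a machine, and this is essentially what a password manager does for you. A minimal sketch using Python's `secrets` module (the function name is mine):

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Return a cryptographically random password of `length` characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k%Qv9#...' - different on every call
```

Note the use of `secrets` rather than `random`: the former is designed for security-sensitive randomness, the latter is predictable by design.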
- Users share their passwords or generic user accounts are used.
It's natural for humans to follow the path of least resistance so you could argue that it's natural for users to share passwords or user accounts so that tasks can be completed more quickly.
However, what most users don't realise is that once a password is shared with anyone (family, IT support, managers, etc) for whatever reason then, put simply, all bets are off - the audit trail is lost and can be used against any party, access to data is unintentionally granted, the same or a similar password could be used to gain access to further systems, etc.
Delegated access largely solves these problems.
- Passwords are stored insecurely.
People intentionally document passwords in Word files, digital notes, physical sticky notes, etc, and unintentionally document them in emails, chat histories, tickets, etc. All of these are quite insecure forms of storage, and if something is insecurely stored then it can be stolen.
Password managers and services such as PasswordPusher have largely solved these problems.
It's incredible how many times I've seen credentials for user accounts not protected by MFA, with passwords that I could have guessed ("Companyname123!", for example), freely handed out (to web designers, for example) - credentials granting access to private data, authoritative name servers, and the domain name registrar (with access to these, you could destroy an organisation).
As far as users and a scary number of sysadmins are concerned, as long as an email system is up-and-running and has some form of spam filter, there's nothing to worry about. This couldn't be further from the truth.
Be they entry- or enterprise-level, all email services should have (but often don't have) the following set up:
- Connections secured by publicly trusted certificates to mitigate against man-in-the-middle attacks (credential and data theft and/or modification in transit), user fatigue / desensitisation to cyber security, etc.
- Sender Policy Framework (SPF) to mitigate against spoofing attacks at the SMTP / 5321.MailFrom level.
- Domain-based Message Authentication, Reporting & Conformance (DMARC) to mitigate against spoofing attacks at the MIME / 5322.From level, among other things.
- DomainKeys Identified Mail (DKIM) to mitigate against emails being modified in transit.
- Auditing so that anomalies can be alerted, to assist with incident management / response, etc.
- Possibly an advanced email filtering service to mitigate against more advanced threats such as display name-level spoofing attacks, spam from ephemeral domain names, etc.
- If possible, TFA / MFA to mitigate against credential theft.
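SPF and DMARC are just DNS TXT records, so their presence and policy can be sanity-checked mechanically. A toy Python checker over hand-supplied record strings (the record values below are illustrative; a real check would query DNS):

```python
def has_strict_spf(txt):
    """True if `txt` is an SPF record whose final mechanism is a (soft) fail."""
    return txt.startswith("v=spf1") and txt.rstrip().split()[-1] in ("-all", "~all")

def dmarc_policy(txt):
    """Return the p= policy of a DMARC record, or None if it isn't one."""
    if not txt.startswith("v=DMARC1"):
        return None
    for tag in txt.split(";"):
        key, _, value = tag.strip().partition("=")
        if key == "p":
            return value
    return None

print(has_strict_spf("v=spf1 include:spf.protection.outlook.com -all"))      # True
print(dmarc_policy("v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"))  # quarantine
```

An SPF record ending in "+all", or a DMARC policy of "none", technically exists but offers little protection - which is exactly the kind of nuance these checks surface.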
Even Office 365 (a giant, multinational, enterprise-level email service with a super user-friendly setup) never mentions DMARC, DKIM, auditing, or TFA / MFA in its setup guide, even though it supports all of them.
As great as the Cloud can be, people almost always sign up to services with little consideration for the consequences.
Is the service free? As a rule of thumb, if you're not paying for a service then your data is being sold in some form.
Does the data contain Personally Identifiable Information (PII)? Is the provider certified? Where is the data physically stored, processed, etc? Is the data encrypted and, if so, how? Is the data shared with any third parties and, if so, repeat all questions for them too? Even for some of the largest cloud service providers (Office 365, Dropbox Business, etc), the answers can be surprising and will determine whether your entire organisation is compliant with data protection law such as Europe's General Data Protection Regulation (GDPR) and the UK's Data Protection Bill, both of which carry heavy maximum fines of either 4% of global turnover or £17 million, whichever is higher.
Misconception #4: We know how it works because it runs on a computer
Users tend to assume that technical support understands anything and everything about a computer. In reality, technical support teams are usually made up of people who didn't design the products; they're simply trained to set them up and resolve problems with them, to an extent.
For example, IT support teams generally support the IT infrastructure (PCs, servers, networks, email, etc), application / service support teams specifically support their software (Google with Search, Gmail, Maps, Android, etc), and OEM support teams specifically support their hardware (Apple with Mac, iPhone, iPad, AirPods, etc).
Products are designed by people and people are inconsistent so, obviously, we all have to defer to those who understand something better every now and again.
Let's draw a parallel. Computers and roads are both physical things that allow a large variety of things to operate on them. Would you expect any given person in the automobile industry to understand anything and everything about the road and what runs on it - the operating parameters of asphalt, cement, dirt, and brick; the optimal shape of a car for aerodynamics; the chemistry of an EV's battery; the maximum RPM of a motorbike tyre; the programming of self-driving cars; etc? No, of course not. People specialise and no one can know everything.