Originally posted - May 25, 2007
You can spend a lot of money on security technology. You can spend a lot of time developing security policies and procedures. And you can have security awareness training sessions once a year for staff. You can do all of these things and still have major security incidents.
Why is it so hard to get security right, even when you do all the things the standards, manuals, books and courses say you are supposed to do?
Every technology has limitations. You can’t depend on technology to fill in the gaps. It is one of the most static components of security. Once you have installed it and configured it, you can’t expect its behaviour to change to suit your changing organization or its changing environment. You must understand the technology’s limitations, and understand what it does and doesn’t do for you in meeting your security objectives.
For example, an Intrusion Detection System (IDS) and an Intrusion Prevention System (IPS) do many things in similar ways, and some things differently. They can both look at network traffic and identify patterns that usually represent threats to normal operations. An IDS will send an alert to the Security Operations people so that they can initiate a response, while an IPS can often take immediate action to stop the threat (while also sending alerts or writing log entries). So, it would seem logical to assume that an IPS is the better technology to deploy, since it can take action on its own. But what if the type of network traffic your organization normally encounters in daily operations would cause a lot of false positives (traffic incorrectly identified as a threat) in an IDS or IPS? Having an automated system take action on an incorrectly identified threat could cause havoc in your critical business systems. So, sometimes you need a human in the loop to look at an alert before deciding to initiate a response. Somebody has to decide what the right technology approach is.
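To make the trade-off concrete, here is a minimal, hypothetical sketch (the signatures and mode names are invented for illustration; real IDS/IPS rule sets are far larger) of how the very same pattern match plays out in detection mode versus prevention mode:

```python
import re

# Hypothetical signatures, for illustration only.
SIGNATURES = {
    "sql_injection": re.compile(r"'\s*(or|OR)\s+1=1"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def inspect(payload, mode="ids"):
    """Decide what happens when a signature matches a piece of traffic."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            if mode == "ips":
                # IPS: act immediately -- and drop legitimate traffic
                # if this happens to be a false positive
                return f"BLOCKED ({name})"
            # IDS: alert only -- a human decides whether to respond
            return f"ALERT ({name}) -> Security Operations queue"
    return "PASS"
```

The same false positive that merely wastes an analyst’s time in "ids" mode takes down legitimate traffic in "ips" mode, which is exactly why somebody has to weigh the organization’s normal traffic profile before choosing the technology.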
In the case of policies and procedures, you may think you are ahead of the game by reusing another organization’s governance documentation, saving a lot of time and money. But what happens when your main operational Web application has 100,000 daily login failures, and the cloned documentation calls for logging every login failure with time, date and username, but no IP addresses, and sets no limit on login failures before lockout? You may have reams of logs, but no idea where an attack on a thousand usernames a day is coming from, or how to stop it without shutting down the whole system. Somebody has to review the policies regularly to make sure they are appropriate for your business systems.
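As a sketch of why the missing IP field matters, here is a minimal, hypothetical example (the one-record-per-line CSV format is invented) of turning raw login-failure records into a short list of suspicious sources. None of this analysis is possible if the source IP was never logged in the first place:

```python
from collections import Counter

def top_attack_sources(log_lines, threshold=3):
    """Count login failures per source IP and flag the noisy ones."""
    failures = Counter()
    for line in log_lines:
        # Assumed record format: timestamp,username,source_ip
        _ts, _user, ip = line.strip().split(",")
        failures[ip] += 1
    return {ip: n for ip, n in failures.items() if n >= threshold}

# Hypothetical sample records
sample = [
    "2007-05-25T10:00:01,alice,203.0.113.9",
    "2007-05-25T10:00:02,bob,203.0.113.9",
    "2007-05-25T10:00:03,carol,203.0.113.9",
    "2007-05-25T10:00:09,dave,198.51.100.7",
]
```

Three failures against three different usernames from one address is a very different story than three users mistyping their passwords, but the log policy has to capture the address for you to ever see it.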
Finally, you can train people annually on security awareness, as most security standards require. But you can still find daily violations of security policies that should be preventable in just about every work environment. I was at an appointment to see a medical professional recently, and was led to the examination room and asked to wait. The office had a desk with two files less than an arm’s length away from my chair, and a few more that were within easy reach. On the outside cover of each file, without opening it, I could easily read the name, address and sex of each patient. Had I wanted to learn more about them I could have just opened the files during the 10 minutes while I waited to be seen. Suppose someone had this information and waited a few days, then called the patient, saying “Hi, I’m calling from Dr. Rosenhorse’s office…” Sounds like a pretty credible source, and who knows what could be gained from the patient at that point? Was this a sign of poorly written policy, poorly defined procedures, poor facilities to support the procedures, or poor handling by the humans involved? Whether the procedure existed or not, it should be common sense for staff to take the precaution of at least keeping personal files out of sight or out of reach of waiting patients.
The reason I used the three different dimensions of technology, procedures and people in these examples was to illustrate how each aspect of security depends on the others. Not only that, but it shows that the most difficult part of getting security to work properly in an organization is in getting the more flexible human side to adjust to fill in the gaps created by the more rigid parts. Humans have a critical part in the success of every type of safeguard. Technology and procedures don’t change very often, but humans behave differently every day, unless they are given knowledge, motivation and the means to behave consistently. Even then, it is not a 100% guarantee. The good part is that humans can think on their feet, and they can report when something is not looking quite right.
The bottom line is that the humans involved in choosing and implementing the technologies, drafting and enforcing the policies and procedures and executing their daily jobs must all understand the value of their roles and the importance of the security safeguards they work with every day. The safeguards have to be appropriate and must be applied consistently in order to protect the value of each individual’s contributions to the organization.
Therefore, I submit that the problem is almost always with the people. Do they understand the organization’s objectives, its values, the value of the systems they interact with and their role in protecting it? Fixing these things at every level of the organization can often do more to improve security than adding newer technology or stricter policies. Make sure the human part of security is the top priority.
Finally, I don’t believe security awareness training on an annual basis is nearly enough to keep staff focussed on what the important things are in their jobs, and how to safeguard them.
The Streetwise Security Coach
Phone: 1-613-693-0997 | Email: email@example.com
Originally posted - May 17, 2007
Just to add a little bit of confusion to the question of “Who are the bad guys and who are the good guys?”, there has been discussion of how some dangerous links are able to get into the Google Adwords ads on the right side (and sometimes the first few search results) of a Google search results page.
Didier Stevens did an interesting experiment to demonstrate how easy it was for a malicious site to get an ad in a high position, where it might infect people with inadequate security on their systems if they click on them.
This goes to show that it doesn’t matter who you think you can trust. I have the following advice:
Seriously, links are not always what they seem. You have to treat them as if you are hitch-hiking. You just can’t be too careful.
Originally posted - May 9, 2007
On occasion, I am struck with how unaware the management in some industries are of the number of risks they face in everyday situations. Take the hospitality industry. In between client meetings I sometimes look for a quiet, comfortable place to sit and do email or finish some work. Hotel lobbies are one of my favourites. I’m sure many business travellers would agree with me.
Within a 30 minute timeframe the other day in a hotel lobby, I made quick notes on every conversation the hotel staff around me were involved in while I was working. Given that I was sitting within earshot of the front desk, but 30 feet away, there were several revealing tidbits I was able to overhear.
Here are some of the types of information that were easily overheard from across the lobby:
The last names and room numbers of several guests checking in, and sometimes their company name or affiliation
I’m sure there are many other types of information I could have learned if I stayed longer.
While some of these things are not necessarily considered sensitive information, they make it easy for attackers to put together plausible scenarios that give them access to information and places they shouldn’t have. It struck me that the staff sometimes get so bored that they have nothing else to talk about except guest incidents and how they handled them. While it seems innocent enough, it is fertile ground for social engineering, data gathering and identity theft.
What could be done? Two things I immediately thought of, but I’m sure there are more:
Let me know if you have any other ideas of ways to better manage this kind of risk in the hospitality industry.
Originally posted - May 3, 2007
I have been hearing the name Alan Calder from several sources lately. So, I ordered his book on IT Governance. It’s definitely worth having on the shelf, even if it does have a fairly high “price per page”.
I found the book to be packed with relevant references for everything from standards to market surveys. I marked it up pretty well inside, making notes to myself on how the information could be used. In particular, it spends a lot of time on how to draw the linkage between IT Governance and IT Security; primarily through the fact that Directors are tasked with managing the risk of an organization, which has much in common with IT risks, especially since most organizations’ “intellectual capital” far outweighs their traditional counterpart based on “book values” of capital.
From Chapter 1: “Risk management at both the strategic and operational levels is a board responsibility, and is impossible without effective IT governance.”
In fact, because IT Governance itself implies that information is being gathered and processed about all aspects of an organization, there must be some protection of the confidentiality, integrity and availability of that information - therefore IT Security is a must for good governance, and the board should be involved… QED.
The bottom line is that it gives a good case for everyone to urge their Boards of Directors to make sure that IT Governance is on the Board’s agenda. After all, capital investment in IT is now over 50% of most companies’ capital budgets, and as an operating cost IT represents over 30% for most companies. Shouldn’t that get some oversight at the Board level?
Among other things I found valuable in the book was the practical approach to putting an IT Governance framework in place. Instead of a critical path plan, it has a set of useful concepts that can be implemented as needed, allowing you to move over time to a more responsible system of managing IT.
As for the low points, the only thing I could call out is the fact that a disproportionately high number of references and examples come from the UK, where Calder is based. However, it still has plenty of relevant information for us in North America, and the UK/European comparisons are certainly not irrelevant to any global organizations. In reality, it just opened my eyes to how much work needs to be done to align standards for governance globally.
Originally posted - April 24, 2007
I’ve noticed on several occasions, when entering office buildings that keep a visitor’s log, that it can make for some interesting reading as you sign your name. Visitor access logs are one of the fundamental audit controls in IT and physical security. Who was there, when, representing whom? But when competitors of one another visit a mutual client, the log can provide competitive advantages one way or another, or it can be used to learn what brand of firewalls or antivirus safeguards an organization uses.
I’m sometimes surprised at the fact that some highly secure organizations have never taken the initiative to allow visitors to sign in on a medium that doesn’t reveal who came in a few minutes or hours earlier. Maybe I’m just paranoid, but it is something to keep in mind. At least keep it to one sheet instead of a binder of the entire month’s visitors. That could be a significant risk for leakage of information useful in planning an attack.
On the lighter side, I have seen the ploy used intentionally to add a sense of urgency to competitive vendors in their final negotiation stages with a customer. They arrive at the customer site to see in the visitor logs that their chief rival was in a few hours earlier with their big guns to make a last minute concession or proposal. Were they really there, or was it just a tactic to make the vendor sweat?
So, just remember that access history can often be viewed by every visitor who follows, unless the log sheets are swapped out or kept out of sight frequently.
Originally posted - April 17, 2007
Did you ever wonder why businesses put up silly signs that say “If we do not offer you a receipt, your purchase is free” at the checkout counter? There’s a very good reason for this, and many other seemingly useless signs. Have you noticed the sign that says “There is never more than $50 in the safe”, which tells thieves that it’s not likely to be worth robbing the convenience store? It’s a lot cheaper than trying to implement technology to prevent every possible attack with “Preventative Safeguards”. These signs, and other types of warnings, are called “Deterrent Safeguards”.
The reason for the sign at the checkout counter is actually to deter store clerks from doing some sleight of hand and pocketing the cash from a transaction without ringing it in. So, in a very clever…AND CHEAP… move, some store owner decided to change the economic model slightly. This makes all the difference. If the clerk knows the customer has an incentive to ask for the receipt, they are much less likely to try to cheat the system.
This is so effective, some people might call it a preventative safeguard, but it doesn’t actually prevent the theft by an employee. It just makes it much less likely. If the customer doesn’t notice the sign, or has seen it so many times that they forget to ask for the receipt, or they just feel silly asking for a receipt for a pack of gum… the clerk can get away with it. But it does stop most clerks from going there on larger items.
If you look around you will see many signs that state rules or warnings that are so obvious you have to wonder why anybody bothers. These warnings can also be effective for defending the seriousness of a security program in court. If a judge notes that there are no warning signs around an open pit, the mining company can be deemed liable for not taking action to warn of the danger. Similarly, when you see a login screen that has a bunch of legal mumbo-jumbo on it, the business is saying to the courts “Look, I’ve told the guy he shouldn’t abuse my system. So if he does, I have a right to go after him, even if he’s my own employee.”
Laws can also provide deterrent by stating consequences such as fines or prison penalties for offenders.
This brings me to a sad and timely issue that arose for all of us yesterday. After the Virginia Tech shootings, the first thing on many people’s minds was probably related to the gun laws in the USA; some for tightening them, and some for defending the status quo. I understand the freedom philosophy, and that it sometimes is in conflict with the public interest. There are many conflicts like this in the security field. They can go both ways. But the point I want to make here is that for a relatively small cost, there is almost always something you can do to deter the majority of people.
I don’t want to get into a long political debate, but I happen to like the gun laws in Canada better than in the USA. It is much harder for an idiot with a grudge to walk around with a firearm in Canada looking for revenge; not that the laws had much effect last fall in Montreal at Dawson College. It could have been just as bad as Virginia Tech in that situation. But when something is illegal, the attitudes that people have around would-be criminals bragging about their gun collection change to become more of a deterrent. If anyone had noticed Kimveer Gill’s VampireFreaks page with his posed photos holding guns and knives, he might well have had the police at his door before he started shooting. In this case, the laws and public attitudes as deterrents could have been successful. Unfortunately, it didn’t happen that way. My sympathies are with all the victims of these crimes.
There will always be people who will argue until the cows come home that stronger gun laws will not prevent the Virginia Techs or the Columbines from happening. However, this logic, while strictly true, misses the point. Deterrent safeguards are so much more cost effective in many cases that not implementing them as a first step in a security program (in addition to prevention, detection and response safeguards) usually results in continued escalating losses. It is sad that more has not been done to avoid suffering and loss on such a large scale.
While those issues will be discussed ad nauseum in news and forums, I’d like to take this back into the business context. Deterrent safeguards can often provide a greater return than trying to prevent or detect attacks technologically. But they are usually in a much more “human” context, and often deal with the psychology of potential attackers, including insiders. So, there is no black and white, right or wrong, and you certainly can’t depend on only using deterrents. There has to be a balance.
The bottom line is, with an intelligently designed configuration, and properly worded communications, you can save a lot of money, and maybe sometimes lives, with deterrent safeguards.
Originally posted - April 10, 2007
Did you ever wonder why so many IT projects end up in trouble? Apparently, the statistics highlighted by IT blogger Ann All at IT Business Edge in her article “IT Governance (Not) on Board” show that most Boards of Directors do not have adequate IT visibility on their agendas, and many also lack the expertise to address IT even if it were on the agenda.
The data Ann All refers to comes from surveys by Deloitte and by researcher Steve Andriole. OK, so Boards are not involved in IT projects. Is that a big surprise? Probably not. But if you think about it, as Andriole says, “Our dependence on IT has never been greater.” Wouldn’t that make it a strategic issue to be addressed by the Board of Directors? Why are there so few IT-aware executives on most businesses’ Boards? As IT Governance author Alan Calder says, most board members are old fogeys who have their assistants print out their emails for them.
It almost goes without saying that companies need more visibility for Security at the Board level as well. At this level IT Governance and Security are as closely related as Parenting and Household Rules. Imagine if parents cared nothing about the amount of time spent sitting in front of screens, and the types of video games their kids were playing in their rooms. What kind of kids would we expect to end up with?
Let’s go beyond Alan Calder’s call for more IT-aware executives, and demand some Security-aware executives on the Boards of our companies. And while we’re at it, why not put all capital IT projects over $25K and IT Operations groups with annual capital and expense budgets over $100K on the agenda, along with their risk management plans and statuses?
These dollar value thresholds are arbitrary examples. They should depend on the size of the business, but anything significant should have visibility to a competent board to make sure shareholder value is being maintained. There’s no excuse for having an IT-ignorant Board of Directors any more.
Originally posted - March 13, 2007
Like it or not, the sad reality is that the insider threat exists in virtually all organizations. Given the right set of circumstances, almost anyone can yield to temptation. In my view it takes a combination of Policies, Awareness, Risk Analysis, Preventative and Detective Safeguards, Audits and Sanctions, as a minimum to be able to say you have done any kind of due diligence in securing your organization’s information. Take any of the recent daily news stories (as they start to become non-News), such as the Texas baby kidnapping, or the Tampa airline firearms smuggling…
The insider threat comes in many different scenarios, some of which may not seem to be insider-related. For example,
These are just a few examples. Without a complete set of security policies and implementation there are just too many scenarios that you might not think of. A good counter to the insider threat involves a methodical sensitivity or risk analysis that identifies what information, assets or business systems can be compromised, and how much it would impact the organization, its partners, or its customers.
The combination of policy, awareness and other safeguards provide layers that make it more difficult for an insider threat to succeed without being caught. Most of all, if employees or anyone with access knows that the chances are slim, and the consequences of being caught are high, the risk becomes much more manageable.
In a strange kind of twist, some people think that their procedures or safeguards are so obscure, nobody would think they could get away with an insider attack. That’s called Security by Obscurity, and it is rarely a good idea on its own. However, there is a balance needed between letting people know the safeguards are there (deterrent safeguards), and keeping the details vague enough that people don’t know where the weakest points are.
There is a saying: “Trust, but verify”. We all want to trust our employees, but they must know that they are accountable, and it is in the organization’s best interests, and those of its clients, to put the right safeguards in place to monitor and counter insider threats. It shouldn’t be a privacy debate. The company’s assets are its own, and it has an obligation to protect them.
Originally posted - March 28, 2007
As Mike Rothman (the Pragmatic CSO) wrote yesterday on “Incrementally Getting to Secure Code“, there are a lot of Security Managers (and/or CSOs) who have inherited a big mess of legacy systems and infrastructure. You can’t hope to put a program in place instantaneously, especially not when you don’t have a budget for new systems. Some systems may have big holes that won’t be patched for a while. Where do you start? One person I know was parachuted into a security management position, and the first time he read the policies and documentation from his predecessor, he says it meant absolutely nothing to him. So, it’s been a struggle, but he’s doing very well.
So, here are some things to consider: your environment, your skills, and your methodology. Mike covers them all well in his book “The Pragmatic CSO”. While I confess that I only recently bought his book, and haven’t finished reading it yet, I am hoping there is a section on “Triage for the new CSO”.
One thing I think you can do to start getting to know your environment is to spend a few minutes each day reading your system logs. If you have a lot of systems, pick the ones you think are most critical to the business operations.
You should be looking for login events by privileged accounts such as ROOT, SYSADMIN, SU, ADMINISTRATOR, and any personnel or usernames that you think might have high privileges and/or knowledge of your systems. Most consoles will log this stuff by default. Login failures are a good thing to check for, especially if there are a lot of them. Someone might have found out the ROOT password, and it might be used maliciously. Or it might just be used for some routine task in a benign way because “that’s the way it was always done”.
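A rough sketch of this kind of daily triage might look like the following (it assumes a common syslog-style “Failed password” line format; the account list and patterns are examples you would adjust for your own systems):

```python
import re

# Example privileged account names -- adjust for your environment.
PRIVILEGED = {"root", "sysadmin", "su", "administrator"}

# Matches syslog-style SSH failure lines and captures the username.
FAILURE = re.compile(r"Failed password for (?:invalid user )?(\S+)")

def privileged_failures(log_lines):
    """Count failed logins against privileged accounts."""
    counts = {}
    for line in log_lines:
        match = FAILURE.search(line)
        if match:
            user = match.group(1).lower()
            if user in PRIVILEGED:
                counts[user] = counts.get(user, 0) + 1
    return counts
```

A dozen ROOT failures in one night is worth a conversation with whoever owns that box, even before you have a formal program in place.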
Either way, these accounts must be controlled and all activities logged. Then check for status messages related to accesses. If there are databases, they should be logging major operations such as updates. The amount of data could get large, but it is important to know who is updating the data, and that they are the ones authorized to do so. In a bit of serendipity, Mike has a link today for “what to log“. Very timely. If you don’t see any of these types of log messages, find a way to set logging levels for these types of system events, and check them daily until you have a view to what’s happening behind the scenes. If anything looks suspicious, save the logs and find someone who knows what the data means. If you don’t know if you should trust them, contact someone in the security community who can do forensics and is well-respected.
This feels a bit like sitting in a control tower, talking a student pilot in control of a 747 down to the ground. But the idea is to find out information about anything that might be a big threat today. In the meantime, I do recommend using a systematic approach to address your whole business’s security management. I like Mike’s P-CSO approach, and I will probably add more comments on it over time. But for now, if you aren’t sure what to do while waiting for his book to arrive, read your systems’ logs.
In an effort to consolidate all my online works, I'm copying all of my past information security articles from www.securityviews.com (started in January 2007) to The Streetwise Security Zone, under this column. So, you'll be able to find my articles while browsing the SWSZ site.
Some of the articles are dated, and I've tried to update some as I copy them over. But they are still subject to your comments at any time.
Hopefully within the next week or two, I will have all the articles copied over, and will redirect the URL to this site.
I won't be sending all of the articles via this email feed, but will let you know when the transfer is completed.
The next job will then be to transfer the Honey Stick Project website content to the SWSZ.
Originally posted - February 6, 2007
One aspect of IT Security that always seems lacking is in the treatment of Security Awareness, both within organizations and with individual citizens. The book that led me to create this blog, What No One Ever Tells You About Blogging and Podcasting : Real-Life Advice from 101 People Who Successfully Leverage the Power of the Blogosphere by Ted Demopoulos, had a lot of great ideas that can apply to creating a blog on IT Security. One valuable use of blogs can be to enhance Security Awareness within an organization.
It’s a constant struggle for Security Management people to keep people thinking about why and how they need to protect their valuable assets. A blog with an email feed can allow managers to provide frequent updates on the latest threats, security briefing sessions and new policies. Many organizations, as pointed out in Ted’s book, have started to use blogs internally for internal corporate communications. At the very least, security articles should be added to the internal corporate newsletters.
My feeling (unsupported by any methodically administered survey) is that a Security Manager who can get Corporate Marketing to add a weekly security blurb into the internal newsletter will be recognized 4 out of 5 times in the hall by the top executives - especially if you put your picture beside the headlines!
If you need ideas on Security Awareness topics, please let me know.
Originally posted - February 3, 2007
In the last of 6 episodes in the podcast series at “Security Round Table“, there was a great discussion on Instant Messaging security issues. One of the most interesting aspects of the discussion was whether enterprises would try to completely lock down IM facilities so that people couldn’t use them for personal “unproductive” chatting. The consensus in the panel seemed to be that it would not really be possible, given that, unlike most other technologies, which are so expensive that they originate in the enterprise and migrate to the public masses (e.g., cell phones and pagers), IM started out in the public domain and is migrating into enterprises. Basically, it’s much easier to deny a technology to someone who never had it in the first place than to take it away once they have it.
There are, of course, security issues aplenty with IM tools, such as Availability (consumption of corporate network bandwidth), Confidentiality (leaking of corporate Intellectual Property), and Integrity (known vulnerabilities that provide vectors for malicious code). On a personal note (privacy mostly), I used to work in a place where my boss expected people to use AOL Instant Messenger, and to be logged in at all times when they were working…so he could keep an eye on who was at their desk! I have yet to come across anyone else who has seen it used in that way. I saw it as an invasion of privacy, since being logged in and online did not necessarily correspond with a worker’s actively productive times. Mysteriously, my laptop never liked AIM, and often crashed. So, happily, I wasn’t able to keep it installed. When he asked me why I wasn’t using AIM I told him that it crashed my machine. I never really knew if he believed me, or if it affected his opinion of how productive I was. In any event, I think it’s a strange way to keep a leash on your team members.
Anybody care to comment?
Originally posted - February 23, 2007
In the last while, I haven’t heard much about Data Integrity in the news. I guess that’s a good thing. Nothing to worry about, right? I doubt it. What is one of the worst threat scenarios you could imagine in your enterprise? Identity theft, credit card fraud, data compromise? Maybe. But what if someone is able to gain access to a key database server in your operations zone? Has anyone considered what could happen if the culprit was sympathetic with your competitor, or had an axe to grind with your organization’s management?
In the old days, hackers broke in and defaced Web sites. Well, in a rare bit of nostalgia lately, they were at it again when they decided to hit the Canadian Nuclear Safety Commission Web site. But nowadays it’s usually for financial gain (selling information) or serious revenge. Most of the time, they do whatever they can to cover up their tracks. This makes it hard to tell if they’ve even touched your servers.
Most often I hear of people doing Threat and Risk Assessments or Incident Investigations, and they concentrate on what the cost is in relation to the resale value of identity information, or to loss of credibility. These are important, for sure. But I rarely hear people considering what the cost could be to an organization if the operational information in its live databases is altered maliciously. How long would it take to discover? How long would it take (and how many analysts) to identify the exact roll-back point? Of course, everyone is doing database checkpoints and journalling (are you?), so it shouldn’t really be a problem to recover systems to the point of compromise and reload all the subsequent transactions up to the current date, right? And to top it off, what if the perpetrator was a malicious internal administrator who knows which safeguards are easiest to get around without detection? Could they plant some malicious code on a server that could continue to corrupt data long after they are gone?
OK, so you might say that these are really far out scenarios. And like Bruce Schneier’s “Best Terror Movie Plot” contest, it could raise some objections that we are just giving people ideas here. That’s really underestimating the imagination of the bad guys. But that’s a whole other discussion.
The real key is to incorporate layers in your security safeguards that not only try to prevent and detect at the perimeter, but inside more secure zones, as well. And it’s not just protecting against access for theft, you must consider ways to prevent and detect unauthorized modification of operational data that your organization depends on. Imagine having buy/sell limits changed for thousands of investment clients; or flight times and fuel quantities for an airline with thousands of aircraft moving around the world… There are features in most database systems, and there are file system integrity tools that can let you know when unscheduled or unauthorized changes are made.
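The file-system integrity idea mentioned above boils down to comparing cryptographic digests against a trusted baseline. Here is a minimal sketch of the principle (real integrity tools add signed baselines, scheduling, tamper-resistant storage and alerting, none of which is shown here):

```python
import hashlib

def snapshot(paths):
    """Record a baseline: map each file path to a SHA-256 digest."""
    digests = {}
    for path in paths:
        with open(path, "rb") as f:
            digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed_files(baseline, current):
    """Report files whose digest no longer matches the baseline."""
    return sorted(p for p, d in current.items() if baseline.get(p) != d)
```

Run the snapshot when the system is in a known-good state, store it somewhere an administrator can’t quietly rewrite, and compare on a schedule; an unscheduled change to a critical configuration or data file is exactly the kind of integrity event worth investigating.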
The key tools for protecting data integrity are access control and detection of unauthorized changes. These risks may not be as spectacular as losing a laptop with half a million identities on it. But, I think the impact to an organization can be just as devastating if all the layers aren’t protected against less obvious attacks against data integrity.
Originally posted - January 21, 2007
This is a term I heard software security guru Gary McGraw use when talking about how you can’t just do a static analysis of an application’s code and expect it to find all vulnerabilities.
That’s because vulnerabilities often creep into applications via poor architectures and designs. Unless analysis is done from the architectural level on down through source code scans and penetration testing, there are only limited types of vulnerabilities that can be found.
According to an IT News story called “The Truth About Software Security“, a spinoff of Symantec named Veracode is offering a static analysis service that analyzes compiled software code. They don’t analyze source code, just the machine code. It’s not a bad thing to do in itself, but I expect a lot of companies will view it as a total replacement for many vital application security techniques that are sorely needed to bring the security of the average application up to a reasonably high assurance level.
What do you think? Will this result in a net “increase” or “decrease” in software security?
Originally posted - January 19, 2007
The recent announcement of a new debit machine fraud scheme shows that merchants need to take more physical security precautions. Store clerks were being distracted long enough for crooks to exchange the real machine with one wired to collect the card and PIN information. They come back a few days later and do the same thing to retrieve the machine.
If the store doesn’t physically secure the machine and have video surveillance, it can very easily miss this attack.
Originally posted - January 17, 2007
I have heard about the “USB Token Penetration Test Experiment” that is reported on the searchsecurity.techtarget.com site. It illustrates how risky USB tokens are, not just to enterprises, but to everyone.
Did you know that if a USB token has an “autorun” file on it, the program that file points to can execute automatically as soon as the token is inserted?
That program will run with all the rights of the user who is logged in at the time, and it could have access to every file on the system. This is one reason why many enterprises disable USB token access.
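The mechanism is worth seeing, because it is so simple: a plain-text `autorun.inf` file at the root of the drive tells Windows what to launch. A hypothetical example (the program name here is purely illustrative, and modern Windows versions no longer honour `open=` from removable drives):

```
[autorun]
open=payload.exe
icon=payload.exe,0
action=Open folder to view files
```

The `action` line is the social-engineering hook: it puts an innocent-looking label on the malicious option in the AutoPlay dialog, so the user launches the payload thinking they are just browsing files.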
At the very least, you should always disable “autorun” for CDs and removable drives, and if you must use a USB token, make sure it comes from a trusted source, or use your own.
Update: This post was the first inspiration for The Honey Stick Project (at http://www.honeystickproject.com) that led me to use specially configured USB memory sticks to simulate real malicious code threats, as a way of measuring the general public's IT security awareness. At this point, it has shown that over 50% of devices dropped get picked up and plugged into computers attached to the Internet. Most people think, "So What?". So, my mission has evolved into a security awareness education initiative that led to the creation of The Streetwise Security Zone at http://www.streetwise-security-zone.com (this site).
Originally posted - January 15, 2007
For a regular discussion on VOIP Security issues, check out the “Blue Box Podcast“, hosted by Dan York and Jonathan Zar.
Apparently, Canada is behind the curve on “breach notification” legislation. In the “Reality V2.0” blog, I came across this interesting note, which surprised me…
“The Canadian Internet Policy and Public Interest Clinic is requesting that changes be made to the Personal Information Protection and Electronic Documents Act (PIPEDA) to force businesses to inform those whose personal information may have been compromised as a result of a security breach. ”
Canadian laws tend to mirror US laws when it comes to security issues (passport laws aside). I would have thought we already had a law saying companies have to disclose to stakeholders when they have had a security breach. I guess I was wrong.
Update: As far as I know, the laws have not been updated yet in Canada. Let me know if you hear of any news.
The title above is the name of a book by Bruce Schneier, well known author and commentator on the field of security.
I just listened to a podcast of an interview with him on ITConversations.com, which was quite interesting. Bruce has evolved from being a cryptography expert to a network security expert, and now to a security generalist. He has some great insights and analogies for security problems. I haven’t read the book, but intend to.
Update: As of November 2008, I still haven't read the book. I do listen to Bruce Schneier's podcasts. I don't always agree with him, and I think he has some biases. But he does raise interesting issues that should be discussed.
Originally Posted - January 14, 2007.
My name is Scott Wright, and I’m a security consultant with a wide range of experience in security management and information technology management issues. My plan for this blog is to provide a way of sharing practical (or sometimes theoretical) ideas about risk management and security for executives, managers and staff of all types of business organizations.
Whether you are interested in corporate governance, security awareness, application development, social engineering or insider threats, I have views that can help your organization manage security better. I can also find many ways to help you more actively through coaching, seminars, training, keynotes, organizational development, team building, risk assessments and other tools.
I want to make this an interesting and interactive blog. I will be asking questions, and will post ideas and links for things I find to be useful for my Clients, and hopefully you and your associates.
As a first post, I have been trying to come up with a meaningful first link. I could link to my consulting business’ Web site, but that would be shamelessly blatant. Besides, you can find that link on the About Scott Wright page.
Let’s start with something we should all keep an eye on: the CERT web page (“CERT” is no longer treated as an acronym for Computer Emergency Response Team, but the CERT/CC was the first computer incident response team). They have lots of news and information on the security status of the Internet.
So, I hope your readings here are interesting and add value to your business in some form. I also hope you will post comments and links to relevant sites. See you in the next post.