Advanced Persistent Threats (APT)

In this blog post we look at what an APT is and how it differs from a traditional targeted human-hacker attack.

Most people will immediately point to the “persistent” part of the definition as the key differentiator. Typical targeted attackers break in, look around, and immediately go after the most valuable assets they find. They figure that the faster they get in and out with the treasure, the more money they make and the less risk they face.

By contrast, APT attackers are there to stay as long as they can. The attackers aren’t trying to steal everything at once. Instead, they exploit dozens to hundreds of computers, logon accounts, and email users, searching for new data and ideas over an extended period of months and years.

Even the treasure taken by APTs is different. The traditional attacker seeks immediate financial gain. They will try to steal identities, transfer money to foreign bank accounts, and more. APT attackers, on the other hand, almost always take only information and leave money untouched. Their targets are corporate and product secrets.

An APT often steals large amounts of information each week, collecting it on a centralized computer within the compromised network before sending it all home in a single archive file (often a tarball). Many compromised networks run APT bots that collect every new folder, file, and email and then send it home. In effect, the victims get an online backup system that rivals anything they could pay for from a legitimate company.

Worse yet, APTs are usually so ingrained in an environment that even if you know where they are, they can be difficult or impossible to remove.

Google, DuPont, Walt Disney, and the latest addition to this list, RSA Inc., have all been hit by APTs.

RSA Security Inc. Hacked: How It Happened

RSA, the security division of EMC and producer of the SecurID systems used by countless corporations (and the Department of Defense), has been hacked. The company sent out messages to its clients and posted an open letter stating that it’s been the victim of an “advanced” attack that “resulted in certain information being extracted from RSA’s systems” — information “specifically related to RSA’s SecurID two-factor authentication products.” A copy of the letter can be found at https://www.rsa.com/node.aspx?id=3872


The worry is that source code to the company’s SecurID two-factor authentication product was stolen, which would possibly allow hackers to reverse-engineer or otherwise break the system.


Initially, RSA released no details about how the attack was carried out. Now RSA, which is a unit of storage giant EMC, has gone into some detail about how its systems were breached, in a blog post by Uri Rivner, whose title is Head of New Technologies, Identity Protection and Verification. It all started with phishing emails.


Over the course of two days, two groups of emails were sent to a small group of employees, none of them high profile, nor apparently especially senior. Though RSA doesn’t spell out who received them, the emails may well have gone to the human resources department or some other quiet corner of the company. The emails contained an Excel spreadsheet attachment entitled “2011 Recruitment Plans.” Naturally it was created to look just believable enough that one of the employees who received it fished it out of the spam folder to which it was initially directed and opened it. You can probably fill in most of the blanks from here.


The spreadsheet contained a zero-day exploit that took advantage of a weakness in Adobe Flash, which has since been patched. Through that hole, the attackers were able to install anything they wanted on the target machine. They chose a version of a program called Poison Ivy, a RAT — in this case, RAT stands for “remote administration tool,” a program used to control one computer from another in a different location.


Still unexplained at this point: what information was taken, and does it in any way affect the integrity of RSA’s own security products? When the attack was first disclosed, the company said that some information about its SecurID products was taken by the attackers. This has led to a lot of questions and speculation by security pros, who naturally have to think about the worst-case scenario, and frankly, there are many scenarios to which the adjective “worst” would apply.


The big looming question is whether or not the attacker gained access to the seeds–the random keys embedded in each token–that are used to generate the constantly changing numeric codes that appear on the device’s display.
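SecurID’s actual algorithm is proprietary, so the sketch below is only an illustrative analogue using the openly documented HMAC approach of RFC 4226/6238, not RSA’s implementation. It does show why leaked seeds would be so damaging: anyone holding a seed can reproduce the codes a token displays.

```python
import hashlib
import hmac
import struct
import time

def totp_like_code(seed: bytes, for_time=None, step: int = 60,
                   digits: int = 6) -> str:
    """Illustrative time-based code generator in the spirit of RFC 6238.
    SecurID differs in its internals; the principle is the same:
    the code is a pure function of the secret seed and the current time."""
    now = for_time if for_time is not None else time.time()
    counter = int(now) // step                 # which time window we are in
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)
```

With the seed in hand, an attacker can run the same computation as the token; without it, the codes are unpredictable.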


Evaluation of Anti-Virus Software: Some Commonly Used Criteria

Here is a list of commonly used evaluation criteria for anti-virus software:

  • Ability to produce new virus signatures quickly
  • Dispersed/distributed manageability
  • Unified client features
  • Client transparency
  • Support for all Windows OSes and Linux
  • Web-based management console
  • Company strength and overall AV strategy
  • Ability to integrate with other solutions such as Cisco NAC
  • Proactive notification on potential outbreaks and/or problems
  • Ability to clean up after viruses and/or spyware have infected a system
  • Ability to quickly prevent outbreaks while new virus signatures are not yet available

Each of these criteria is explained further below.

Ability to Produce New Virus Signatures Quickly

The period between when a virus is discovered “in the wild” and when a signature or pattern file is available to clients is extremely critical. The longer it takes to produce and distribute new pattern files, the more likely clients are to become infected.

Dispersed/Distributed Manageability

The ability to give Unit Computing Specialists and/or departmental administrators access to manage their own clients is also an important feature. Given the diversity of departmental IT policies, it is necessary to let departments set policies that differ from what is defined at the global level. Furthermore, departments need to be able to provide customized reports on the systems under their control to their management.

Unified Client Features

The ability of client software to provide antivirus, anti-spyware, spam filtering, and firewall support in a single package is very high on the list of requirements. Packaging all of these features together in a single client not only reduces desktop and system-tray clutter but typically consumes fewer system resources in terms of CPU and memory.

Client Transparency

Another aspect to consider is how the client itself performs while a system is under heavy use. Real-time scanning and monitoring need to be as unobtrusive as possible. This also means that any error messages or warnings that pop up as viruses are found need to be easy to understand and respond to. It is very important that the client be as transparent and easy to use as possible.

Support for Multiple OSes

If a variety of operating systems is in use, it is important that any solution support the full range of Windows operating systems, from Windows XP and 2003 all the way back to Windows 98 and Windows 95. In addition, support for protecting the growing number of Linux desktops and servers may also be required.

Web-Based Management Console

Enterprise management tools need to be web-based for ubiquitous access. Not all system administrators run Windows on their desktops, so a Windows client-based management system is not desirable in such an environment. Furthermore, the console needs to provide granular control over the systems being managed.

Company Strength / Overall AV Strategy

Another factor in selecting an antivirus solution is the strength of the company itself. Fiscally weak or unsound companies tend to get bought out by larger corporations, who may then change the levels of service a product provides, even during a contract. The availability of technical support for the anti-virus software is also relevant here. This is particularly the case when using free anti-virus software.

Ability to Integrate with Other Solutions

Network security is another area of focus when selecting an antivirus solution. A solution should be able to integrate with third-party offerings such as Cisco’s Network Admission Control (NAC). It is therefore essential that the anti-virus solution be able to integrate with the existing network infrastructure.

Proactive Notification of Potential Outbreaks and/or Problems

Limited human resources mean that continuous monitoring of the system may not be possible. Therefore, it is critical that any solution be able to watch systems and automatically notify system administrators of possible outbreaks or issues on the network. The ability to email or page one or more administrators when there appears to be an anomaly on the network should be considered.
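A minimal sketch of what such notification might look like. The alert policy (how many detections trigger an email) is separated from delivery so it can be exercised without a mail server; the SMTP host, addresses, and threshold below are hypothetical placeholders:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical settings -- replace with your own relay and addresses.
SMTP_HOST = "smtp.example.com"
ALERT_FROM = "av-monitor@example.com"
ALERT_TO = "admins@example.com"
THRESHOLD = 10  # detections per interval that suggest an outbreak

def build_alert(detections: int, host: str) -> EmailMessage:
    """Compose an outbreak alert message for a given host."""
    msg = EmailMessage()
    msg["Subject"] = f"Possible outbreak: {detections} detections on {host}"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(
        f"{detections} detections on {host} exceeded the threshold "
        f"of {THRESHOLD}. Please investigate."
    )
    return msg

def check_and_alert(detections: int, host: str) -> bool:
    """Return True (and send mail) only when the threshold is crossed."""
    if detections < THRESHOLD:
        return False
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(build_alert(detections, host))
    return True
```

Commercial consoles add paging, escalation, and de-duplication on top of this basic pattern.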

Ability to Clean Up after Viruses and/or Spyware

Obviously another factor that must be considered when evaluating antivirus solutions is how well the product is able to clean a system after an infection. If a solution simply detects a virus but doesn’t clean it up well, it doesn’t really save an administrator any time or effort. The solution should be able to successfully clean a majority of infections without having to rebuild the system.

Ability to Prevent Outbreaks Until New Virus Signatures Are Available

Many vendors have begun to discuss “zero-day” protection, but few actually do much about it. The ability to prevent an outbreak when no virus signature or pattern file is yet available is extremely important. Hundreds of systems could become infected in the time between when a virus is detected “in the wild” and when a new pattern is available. A key feature is therefore the ability of the software to keep systems protected even while it is unable to detect the virus.
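One common signature-free heuristic is to measure the entropy of a file or attachment, since packed or encrypted payloads look almost random while ordinary documents do not. A minimal sketch; the cutoff value is an illustrative assumption, not any vendor’s tuning:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, cutoff: float = 7.2) -> bool:
    """Crude heuristic: near-random content (high entropy) is suspicious.
    The 7.2 bits/byte cutoff is an illustrative guess."""
    return shannon_entropy(data) > cutoff
```

Real products combine many such heuristics (behavior monitoring, emulation) precisely because any single one is easy to evade.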

32-bit vs 64-bit Computing

The data bus in any PC is used to move data around inside the computer. It connects memory to the rest of the system, including the processor, which does all the thinking in your computer.

In a 32-bit computer, the width (or size) of the data bus is 32 bits. A 64-bit bus is twice as wide, so the system can move twice as much data around. Being able to move more data means a faster system — but only for specific workloads. Normal office productivity and web surfing will show no advantage at all, whereas graphics processing and scientific calculations will go much faster.

So does it make sense to buy a 64-bit computer?

The most common problem with 64-bit is the general lack of stable software to run on these Ferraris of the computer world. The entire system has to be designed and built for the wider data bus, too, so the system will cost more. That said, most 32-bit software will run on a 64-bit system, though that leaves one wondering why one spent the extra money in the first place. 32-bit computers are also a lot cheaper.

So why do we need 64-bit computers?

The answer: mostly businesses, universities, scientific groups, and government. If you produce videos, computer art, or develop programs, 64-bit systems will be helpful. But for the home user, 64-bit is currently a bit overkill.

An Overview of the New Internet Protocol: IPv6

Since 1981, TCP/IP has been built on version 4 of the Internet Protocol. IPv4 was created when the giant, world-wide Internet we take for granted today was just a small experimental network. Considering how much the Internet has grown and changed over the course of two decades, IPv4 has done its job admirably. At the same time, it has been apparent for many years that certain limitations in this venerable protocol would hold back the future growth of both Internet size and services if not addressed.
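The headline limitation is address space: IPv4’s 32-bit addresses versus IPv6’s 128-bit addresses. Python’s standard ipaddress module makes the comparison concrete:

```python
import ipaddress

# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
print(f"IPv4 addresses: {2**32:,}")
print(f"IPv6 addresses: {2**128:,}")

# The stdlib ipaddress module handles both families uniformly.
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)  # 4 6

# Even a single /32 IPv6 allocation dwarfs the entire IPv4 internet:
net = ipaddress.ip_network("2001:db8::/32")
print(net.num_addresses == 2**96)  # True
```

(192.0.2.0/24 and 2001:db8::/32 are the ranges reserved for documentation, so the examples above are safe to reuse.)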

Difference between GPRS and WAP

In the early days of the mobile internet, manufacturers and designers had a hard time providing internet access similar to what computers could achieve with modems and an HTML browser. GPRS (General Packet Radio Service) was the first technology successfully implemented in 2G mobile phone systems to send and receive data between mobile phones and transmission towers. But by itself, GPRS does not provide the mechanisms for users to browse the internet. For that, WAP, the Wireless Application Protocol, was developed. You can think of WAP as a toned-down version of HTML, while GPRS is a toned-down version of dial-up. To use another analogy, GPRS is the highway and WAP is the vehicle that uses the highway.


GPRS allowed the mobile phone companies to create a digital link over which data can be effectively transmitted. But GPRS is not used exclusively by WAP; other services also use GPRS because of the advantages it offers. One of these is SMS, commonly known as text messaging, since using a GPRS connection for SMS allows a greater number of messages to be sent within a given timeframe.

Theoretically, it was possible to use HTML browsers with mobile phones. But with the very slow GPRS and the very weak processing capabilities of mobile phones at the time, HTML browsers were simply not practical. WAP browsers present a smaller and leaner version of the internet to work around the small screens and weak processors of older phones. WML, the Wireless Markup Language, was created to provide a separate and smaller set of keywords for use in WAP browsers.
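For a flavor of what WML looked like, here is a minimal, made-up WML 1.1 document. WML organized content into “decks” of “cards”: the phone downloaded a whole deck in one request and displayed one card at a time, with links navigating between cards:

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- First card: shown when the deck loads -->
  <card id="home" title="Hello">
    <p>Hello from a WAP phone!</p>
    <p><a href="#news">News</a></p>
  </card>
  <!-- Second card: reached via the link above, no extra request needed -->
  <card id="news" title="News">
    <p>No news today.</p>
  </card>
</wml>
```

The deck-of-cards model minimized round trips over slow GPRS links, which is exactly the constraint full HTML ignored.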

Although the two were a good match at first, advances in technology have provided better options. EDGE, an improvement over GPRS, allowed consumers to use WAP at higher speeds, which gave a better internet experience. And the more recent 3G technologies mean that there is very little need to stick with WML. Most recent mobile phones can now support full HTML browsing, which is an indication that WAP will soon be obsolete. Although GPRS is still very common today, it is only a matter of time before 3G networks take over the areas that are currently covered only by GPRS. By that time, GPRS will also be obsolete.

Summary:
1. GPRS is a method of connecting to your provider while WAP is the protocol that runs on top of GPRS
2. WAP is suited for GPRS only connections
3. There are also other services that use GPRS aside from WAP
4. WAP can also be used on EDGE and even on 3G connections

Log Management and Intelligence (LMI)

Introduction

LMI is a governance enabler. Log data is no longer just the domain of technical personnel, who traditionally used it for troubleshooting. Log data is no longer just an IT asset; it is a corporate and business asset. It is used extensively by both management and external parties (auditors, forensic investigators) and has therefore gained executive-level visibility. In this post we look at the new approach to log management.


A list of major security breaches of 2009

As we begin a new year, I thought it would be a good time to reflect on some major information security breaches of 2009. The list of organizations involved makes this list very interesting. What makes it even more interesting is the analysis of the breaches, which indicates that the incidents could have been averted by adopting some fundamental security best practices.
