Big Data meets Complex Event Processing: AccelOps delivers a better architecture to attack the data center monitoring and analytics problem

The latest BriefingsDirect podcast discussion centers on how new data and analysis approaches are significantly improving IT operations monitoring, as well as providing stronger security.

The conversation examines how AccelOps has developed technology that correlates events with relevant data across IT systems, so that operators can gain much better insights faster, and then learn as they go to better predict future problems before they emerge. That’s because advances in big data analytics and complex event processing (CEP) can come together to provide deep, real-time, pattern-based insights into large-scale IT operations.

Here to explain how these new solutions can drive better IT monitoring and remediation response — and keep those critical systems performing at their best — is Mahesh Kumar, Vice President of Marketing at AccelOps. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: AccelOps is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Is there a fundamental change in how we approach the data that’s coming from IT systems in order to get a better monitoring and analysis capability?

Kumar: The data has to be analyzed in real-time. By real-time I mean in streaming mode before the data hits the disk. You need to be able to analyze it and make decisions. That’s actually a very efficient way of analyzing information. Because you avoid a lot of data sync issues and duplicate data, you can react immediately in real time to remediate systems or provide very early warnings in terms of what is going wrong.

The challenges in doing this streaming-mode analysis are scale and speed. The traditional approaches with pure relational databases alone are not equipped to analyze data in this manner. You need new thinking and new approaches to tackle this analysis problem.

Gardner: Also on the security side, attackers are trying different types of attacks. So this needs to happen in real time as well?

Kumar: You might be familiar with advanced persistent threats (APTs). These are attacks where the attacker tries their best to be invisible. These are not the brute-force attacks that we have witnessed in the past. Attackers may hijack an account or gain access to a server, and then over time, stealthily, be able to collect or capture the information that they are after.

These kinds of threats cannot be effectively handled only by looking at data historically, because these are activities that are happening in real-time, and there are very, very weak signals that need to be interpreted, and there is a time element of what else is happening at that time. This too calls for streaming-mode analysis.

If you notice, for example, a database administrator accessing a server for which they have an admin account, that tells you a certain amount about the activity. But if, on the other hand, you learn that a user is accessing a database server for which they don’t have the right level of privileges, it may be a red flag.

You need to be able to connect this red flag that you identify in one instance with the same user trying to do other activity in different kinds of systems. And you need to do that over long periods of time in order to defend yourself against APTs.

Gardner: It’s always been difficult to gain accurate analysis of large-scale IT operations, but it seems that this is getting more difficult. Why?

Kumar: If you look at trends, there are on average about 10 virtual machines (VMs) to a physical server. Predictions are that this is going to increase to about 50 to 1, maybe higher.
