PCI SSC unveils two new validation programs for software vendors and assessors

The PCI Security Standards Council (PCI SSC) announced two new validation programs that payment software vendors can use to demonstrate that both their development practices and their payment software products provide the software security resiliency needed to protect payment data.

Under the Secure Software Lifecycle (Secure SLC) and Secure Software Programs, Software Security Framework Assessors will evaluate vendors and their payment software products against the PCI Secure SLC and Secure Software Standards. PCI SSC will list Secure SLC Qualified Vendors and Validated Payment Software on the PCI SSC website as a resource for merchants.

PCI SSC is introducing these programs as part of the PCI Software Security Framework (SSF), a collection of standards and programs for the secure design, development and maintenance of existing and future payment software.

The SSF expands beyond the scope of the Payment Application Data Security Standard (PA-DSS) and will replace PA-DSS, its program and List of Validated Payment Applications when PA-DSS is retired in 2022. During the interim period, the PA-DSS and SSF Programs will run in parallel, with the PA-DSS Program continuing to operate as it does now.

Secure SLC Program and Secure Software Program documentation is now available on the PCI SSC website. This includes Program Guides and FAQs, with information on the vendor and payment software validation process, and Qualification Requirements for SSF Assessors.

PCI SSC plans to start accepting applications from assessors by the end of 2019. Training will be available in early 2020, first for Payment Application Qualified Security Assessors (PA-QSA) and QSAs, and then for new assessors. Once SSF Assessors are in place, vendors can begin the validation process for their software lifecycle practices and payment software.

Secure SLC Program

  • Validation to the Secure SLC Standard illustrates that the software vendor has mature secure software lifecycle management practices in place to ensure its payment software is designed and developed to protect payment transactions and data, minimize vulnerabilities, and defend against attacks.
  • Upon successful evaluation by a Secure SLC Assessor, validated software vendors will be recognized on the PCI SSC List of Secure SLC Qualified Vendors.
  • Secure SLC Qualified Vendors will be able to self-attest to delta changes for any of their products that are listed as Validated Payment Software under the Secure Software Program.

Secure Software Program

  • Validation to the Secure Software Standard illustrates that the payment software product is designed, engineered, developed, and maintained in a manner that protects payment transactions and data, minimizes vulnerabilities, and defends against attacks.
  • Initially, this program is specific to payment software products that store, process, or transmit clear-text account data, and are commercially available and developed by the vendor for sale to multiple organizations. As new modules are added to the Secure Software Standard to address other software types, use cases and technologies, the program scope will expand to support them.
  • Upon successful evaluation by a Secure Software Assessor, validated payment software will be recognized on the PCI SSC List of Validated Payment Software, which will replace the current List of PA-DSS Validated Payment Applications when PA-DSS is retired in October 2022. Until then, PCI SSC will continue to maintain the PA-DSS Program and list, which includes honoring existing validation expiration dates and accepting new PA-DSS submissions until June 2021.

“These programs work together with the PCI Secure SLC and Secure Software Standards to help vendors address the security of both their development practices and their payment software products.

“We’re pleased to have the Secure SLC and Secure Software Programs documentation available now as the initial step towards providing the industry with validated listings of trusted payment software vendors and products under the PCI Software Security Framework,” said PCI SSC Chief Operating Officer Mauro Lance.

“In the meantime, PCI SSC recognizes that transitioning from PA-DSS to the Software Security Framework will take time, and we want to reassure PA-DSS vendors, PA-QSAs and users of PA-DSS validated payment applications that the PA-DSS Program remains open and fully supported until October 2022, with no changes to how existing PA-DSS validated applications are handled.”

Read the Full Article here: Help Net Security – News

Alphabet’s cybersecurity company Chronicle will join Google Cloud

Alphabet’s cybersecurity company Chronicle announced today that it’s joining Google and will become part of Google Cloud. The cybersecurity company launched in January 2018, and it released its first commercial product, Backstory, in March. In a blog post, Chronicle CEO and co-founder Stephen Gillett said Google Cloud’s cybersecurity tools and Chronicle’s Backstory and VirusTotal are complementary and will be leveraged together.

Chronicle got its start as a project inside X, Alphabet’s "moonshot factory," and was quickly spun out into a standalone company. When Chronicle introduced Backstory this spring, the company compared it to Google Photos for cybersecurity: users dump in data from various security products, and Backstory organizes the alerts and scans for legitimate threats.

It’s not unusual for Alphabet to fold its "Other Bets" into Google, as we saw when it integrated Nest’s hardware team last year. But Alphabet usually does so when companies are more mature and have proven their ability to make money. Alphabet’s decision to move Chronicle to Google Cloud could be taken as a vote of confidence in the platform. It could also speak to Google Cloud’s need for additional cybersecurity tools. Chronicle expects the integration to be completed sometime this fall, and it’s getting started on "accelerated product integrations" immediately.

Via: CNBC

Source: Chronicle

Read the Full Article here: Engadget

AWS Security Hub aggregates security alerts and conducts continuous compliance checks

AWS Security Hub gives customers a central place to manage security and compliance across an AWS environment. It aggregates, organizes, and prioritizes security alerts – called findings – from AWS services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, and from a large and growing list of AWS Partner Network (APN) solutions.

Customers can also run automated, continuous compliance checks based on industry standards and best practices, helping to identify specific accounts and resources that require attention. AWS Security Hub brings all of this information together in one place, providing a comprehensive view of a customer’s overall security and compliance status visually summarized on integrated dashboards with actionable graphs and tables.

There are no upfront commitments required to use AWS Security Hub, and customers pay only for the compliance checks performed and security findings ingested, with no charge for the first 10,000 security finding events each month.

Enterprises today use a broad array of AWS and third-party tools to secure their environments. These tools are effective but they also generate many findings – all viewable in different consoles and dashboards. Many customers use a patchwork set of custom-built solutions to manage and monitor compliance across distributed accounts and workloads.

To understand their overall security and compliance state, customers must either manually pivot between all these tools or invest in developing complex systems to aggregate and analyze the findings. This makes it challenging for security teams to centralize their security findings, prioritize the events that matter most, and ensure that accounts and workloads are operating in a compliant manner.

With AWS Security Hub, customers can quickly see their entire AWS security and compliance state in one place. AWS Security Hub collects and aggregates findings from the security services running in a customer’s environment, such as threat detection findings from Amazon GuardDuty, vulnerability scan results from Amazon Inspector, sensitive data identifications from Amazon Macie, and findings generated by a wide portfolio of security tools from APN partners.

The service then correlates findings across providers to prioritize the most important information, highlight trends, and identify resources that may require attention. Customers can also continuously monitor their environment with automated configuration and compliance checks based on industry standards and best practices, such as Center for Internet Security (CIS) AWS Foundations Benchmark.

If these checks identify any accounts or resources that deviate from a best practice, AWS Security Hub flags the problem and recommends remediation steps. AWS Security Hub gives security teams the visibility they need to prioritize work and improve their security and compliance state by centralizing their most important information in one easy-to-manage place.

“AWS Security Hub is the glue that connects what AWS and our security partners do to help customers manage and reduce risk,” said Dan Plastina, Vice President for External Security Services at AWS. “By combining automated compliance checks, the aggregation of findings from more than 30 different AWS and partner sources, and partner-enabled response and remediation workflows, AWS Security Hub gives customers a simple way to unify management of their security and compliance.”

AWS Security Hub ingests data from different sources using a standard findings format, eliminating the need for time-consuming data conversion efforts. Amazon CloudWatch and AWS Lambda integrations allow customers to execute automated remediation actions based on specific types of findings. Customers can also integrate AWS Security Hub with their automation workflows and third-party tools like ticketing, chat, and Security Information and Event Management (SIEM) systems to quickly take action on issues.
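
The normalize-and-prioritize pattern described above can be sketched in a few lines of Python. The input records and field names below are invented for illustration; the real service ingests findings in the AWS Security Finding Format (ASFF), which is far richer than this sketch.

```python
# Minimal sketch of aggregating provider-specific findings into one shared
# schema and ranking them by severity. Records and field names are invented;
# Security Hub itself uses the AWS Security Finding Format (ASFF).
raw_findings = [
    {"source": "guardduty", "sev": 8.0, "title": "Recon:EC2/PortProbe"},
    {"source": "inspector", "sev": 5.5, "title": "CVE-2019-0001 on i-0abc"},
    {"source": "macie",     "sev": 9.1, "title": "PII in public S3 bucket"},
]

def normalize(f):
    """Map a provider-specific record onto one shared schema."""
    return {
        "ProductName": f["source"],
        "Severity": f["sev"],        # normalized 0-10 scale
        "Title": f["title"],
    }

# Highest-severity findings first, regardless of which tool produced them.
findings = sorted((normalize(f) for f in raw_findings),
                  key=lambda f: f["Severity"], reverse=True)

for f in findings:
    print(f'{f["Severity"]:>4}  {f["ProductName"]:<10} {f["Title"]}')
```

Because every record is mapped onto the same schema before sorting, adding a new provider only requires a new mapping, not changes to the prioritization logic.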

Leading providers, including Alert Logic, Armor, Atlassian, Barracuda, Check Point (CloudGuard Dome9 and CloudGuard IaaS), Cloud Custodian, CrowdStrike, CyberArk, F5, GuardiCore, IBM, McAfee, PagerDuty, Palo Alto Networks (Demisto, RedLock, and VM-Series), Qualys, Rapid7 (VMInsight and InsightConnect), ServiceNow, Slack, Splunk (Splunk Enterprise and Splunk Phantom), Sophos, Sumo Logic, Symantec, Tenable, Turbot, and Twistlock have built integrations with AWS Security Hub, with many new integrations to be added regularly.

Customers can try AWS Security Hub at no additional charge with a 30-day free trial. AWS Security Hub is available today in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (Sao Paulo), with additional regions coming soon.

Read the Full Article here: Help Net Security – News

BloodHound – Hacking Active Directory Trust Relationships

BloodHound is a tool for attacking Active Directory trust relationships. It uses graph theory to reveal the hidden and often unintended relationships within an Active Directory environment.

Attackers can use BloodHound to easily identify highly complex attack paths that would otherwise be impossible to quickly identify. Defenders can use it to identify and eliminate those same attack paths. Both blue and red teams can use BloodHound to easily gain a deeper understanding of privilege relationships in an Active Directory environment.

It is a single page JavaScript web application, built on top of Linkurious, compiled with Electron, with a Neo4j database fed by a PowerShell ingestor.
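
BloodHound runs its graph queries in Neo4j, but the core idea (finding the shortest chain of privileges between two objects) can be sketched with a plain breadth-first search over a small privilege graph. All names and edges below are invented; the edge labels merely mirror BloodHound relationship types such as MemberOf, AdminTo, and HasSession.

```python
from collections import deque

# Hypothetical AD privilege graph: an edge means "has control over / can reach".
edges = {
    "alice@corp.local":         [("MemberOf", "HELPDESK@corp.local")],
    "HELPDESK@corp.local":      [("AdminTo", "WS01.corp.local")],
    "WS01.corp.local":          [("HasSession", "bob_da@corp.local")],
    "bob_da@corp.local":        [("MemberOf", "DOMAIN ADMINS@corp.local")],
    "DOMAIN ADMINS@corp.local": [],
}

def shortest_attack_path(start, target):
    """Breadth-first search: the shortest chain of privileges from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for _rel, nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the target is unreachable from this principal

print(shortest_attack_path("alice@corp.local", "DOMAIN ADMINS@corp.local"))
```

A defender can read the same output in reverse: breaking any single edge in the chain (removing a group membership, clearing a privileged session) eliminates this attack path.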

BloodHound Options

Enumeration Options

  • CollectionMethod – The collection method to use. Accepts a comma-separated list of the following values (Default: Default):
    • Default – Performs group membership collection, domain trust collection, local admin collection, and session collection
    • Group – Performs group membership collection
    • LocalAdmin – Performs local admin collection
    • RDP – Performs Remote Desktop Users collection
    • DCOM – Performs Distributed COM Users collection
    • GPOLocalGroup – Performs local admin collection using Group Policy Objects
    • Session – Performs session collection
    • ComputerOnly – Performs local admin, RDP, DCOM and session collection
    • LoggedOn – Performs privileged session collection (requires admin rights on target systems)
    • Trusts – Performs domain trust enumeration
    • ACL – Performs collection of ACLs
    • Container – Performs collection of Containers
    • ObjectProps – Collects object properties such as LastLogon and DisplayName
    • DcOnly – Performs collection using LDAP only. Includes Group, Trusts, ACL, ObjectProps, Container, and GPOLocalGroup.
    • All – Performs all Collection Methods except GPOLocalGroup
  • SearchForest – Search all the domains in the forest instead of just your current one
  • Domain – Search a particular domain. Uses your current domain if null (Default: null)
  • Stealth – Performs stealth collection methods. All stealth options are single threaded.
  • SkipGCDeconfliction – Skip Global Catalog deconfliction during session enumeration. This can speed up enumeration, but will result in possible inaccuracies in data.
  • ExcludeDc – Excludes domain controllers from enumeration (avoids Microsoft ATA flags 🙂 )
  • ComputerFile – Specify a file to load computer names/IPs from
  • OU – Specify which OU to enumerate
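
To illustrate the comma-separated CollectionMethod convention above, here is a small parser/validator in Python. SharpHound does this parsing internally; the function name and error handling below are illustrative only, not part of the tool.

```python
# Valid CollectionMethod values, as documented above.
VALID = {
    "Default", "Group", "LocalAdmin", "RDP", "DCOM", "GPOLocalGroup",
    "Session", "ComputerOnly", "LoggedOn", "Trusts", "ACL", "Container",
    "ObjectProps", "DcOnly", "All",
}

def parse_collection_methods(value="Default"):
    """Split a comma-separated CollectionMethod string and reject unknown values."""
    methods = [m.strip() for m in value.split(",") if m.strip()]
    unknown = [m for m in methods if m not in VALID]
    if unknown:
        raise ValueError(f"Unknown collection method(s): {unknown}")
    return methods

print(parse_collection_methods("Group,Session,Trusts"))  # ['Group', 'Session', 'Trusts']
```

This mirrors how one would pass, for example, `-CollectionMethod Group,Session,Trusts` to combine several collection types in a single run.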

Connection Options

  • DomainController – Specify which Domain Controller to connect to (Default: null)
  • LdapPort – Specify what port LDAP lives on (Default: 0)
  • SecureLdap – Connect to AD using Secure LDAP instead of regular LDAP. Will connect to port 636 by default.
  • IgnoreLdapCert – Ignores LDAP SSL certificate. Use if there’s a self-signed certificate for example
  • LDAPUser – Username to connect to LDAP with. Requires the LDAPPass parameter as well (Default: null)
  • LDAPPass – Password for the user to connect to LDAP with. Requires the LDAPUser parameter as well (Default: null)
  • DisableKerbSigning – Disables LDAP encryption. Not recommended.

Performance Options

  • Threads – Specify the number of threads to use (Default: 10)
  • PingTimeout – Specifies the timeout for ping requests in milliseconds (Default: 250)
  • SkipPing – Instructs SharpHound to skip the ping requests used to check whether systems are up
  • LoopDelay – The number of seconds in between session loops (Default: 300)
  • MaxLoopTime – The amount of time to continue session looping. Format is 0d0h0m0s. Null will loop for two hours. (Default: 2h)
  • Throttle – Adds a delay after each request to a computer. Value is in milliseconds (Default: 0)
  • Jitter – Adds a percentage jitter to throttle. (Default: 0)

Output Options

  • JSONFolder – Folder in which to store JSON files (Default: .)
  • JSONPrefix – Prefix to add to your JSON files (Default: “”)
  • NoZip – Don’t compress JSON files to the zip file. Leaves JSON files on disk. (Default: false)
  • EncryptZip – Add a randomly generated password to the zip file.
  • ZipFileName – Specify the name of the zip file
  • RandomFilenames – Randomize output file names
  • PrettyJson – Outputs JSON with indentation on multiple lines to improve readability. Tradeoff is increased file size.

Cache Options

  • CacheFile – Filename for the SharpHound cache (Default: BloodHound.bin)
  • NoSaveCache – Don’t save the cache file to disk. Without this flag, BloodHound.bin will be dropped to disk
  • Invalidate – Invalidate the cache file and build a new cache

Misc Options

  • StatusInterval – Interval to display progress during enumeration in milliseconds (Default: 30000)
  • Verbose – Enables verbose output

You can download BloodHound here:

Linux x64 – BloodHound-linux-x64.zip
Windows x64 – BloodHound-win32-x64.zip
Source – BloodHound-2.1.0.zip

Or read more here.

Read the Full Article here: Darknet – The Darkside

New Instart Web App and API Protection platform provides app protection from the origin to the browser

Instart, the leader in web application performance and security services, announced the Instart Web App and API Protection (WAAP) platform, delivering comprehensive protection against attacks across the application origin, APIs, edge, and browser.

This platform provides customers with a single cloud-based platform, powered by a single rules engine, and a unified threat intelligence system, to defend against application vulnerabilities, sophisticated bots, and browser-based attacks.

Web app and API exploits are now one of the leading vectors for data breaches, lost revenue, and brand damage. Organizations with a large web presence are battling more than just malicious traffic aiming to bring down their apps.

They now face sophisticated bots working to hurt their brand, privacy regulations that penalize them for compromised customer information, and browser-based threats that are targeting consumers through third-party code.

Each of these threats requires distinctly different protection, with its own detection and remediation methods to mitigate ever-evolving attack types.

“While modern web apps bring immeasurable benefits to consumers, they also add significant complexity to IT organizations looking to secure them and protect their customers,” said Sumit Dhawan, CEO of Instart.

“Today, organizations are deploying multiple solutions with inconsistent rule sets as a way to defend against the multitude of threats attacking their online properties. Ultimately, this leads to security gaps, as intelligence is not shared between solutions. The only way to secure the full web app is to implement a cloud-scale solution with visibility and control over the entire application delivery path, including the origin, the edge, and the endpoint.”

As more applications move to the cloud, traditional appliance-based security technologies no longer protect these apps from all of the emerging threats.

According to Gartner, “By 2023, more than 30% of public-facing web applications will be protected by cloud web application and API protection (WAAP) services that combine DDoS protection, bot mitigation, API protection and web application firewalls (WAFs). This is an increase from fewer than 10% today.”

Instart’s cloud services for application security are already protecting the world’s largest brands from malicious threats to their web apps. Now, the Instart WAAP platform will bring together the company’s Web Security, Bot Management, and Tag Control products as well as its threat intelligence and expert security services to deliver a single, self-service solution with a unified management framework, powerful rules engine, and experienced team of experts.

With the combined solution, security threats can be detected and mitigated at the origin, the edge, and the browser to fully protect a brand’s online property. The Instart WAAP platform includes:

  • Instart Web Security, which prevents sophisticated cyber attacks like distributed denial of service, cross-site scripting, SQL injection, and more using its web application firewall, DDoS protection capabilities, and powerful security rules.
  • Instart Bot Management, which detects and analyzes bot intent and blocks attacks from sophisticated malicious bots to prevent fraud and protect your brand.
  • Instart Tag Control, which gives you complete control of the JavaScript running in your customers’ browsers, including third-party tags, that can impact security, customer privacy, and the overall reliability of your website.
  • Instart Threat Intelligence, which combines multiple automated techniques, such as honeypots and third-party threat feeds to detect various cyberattacks and applies cross-customer learnings to automatically update rules.
  • Instart Managed Security Services, which provides a team of proactive security experts to help customers with implementation, rule creation, and incident response.

“Our customers are consistently telling us that their traditional appliance-based approach to web app and API security is no longer sufficient as the attack vectors expand and they incorporate more cloud services into their customer-facing web apps,” said Mitch Parker, Chief Customer Officer at Instart.

“Instart WAAP is the only platform to combine an intelligent, globally distributed cloud service with a browser virtualization layer for complete visibility and protection against both known application threats and emerging threats, such as advanced automated bots and attacks from third-party services integrated into a web experience.”

Read the Full Article here: Help Net Security – News

Shared Assessments unveils new Third Party Risk Management Framework

The Shared Assessments Program, the member-driven leader in third party risk assurance, announced a new Third Party Risk Management (TPRM) Framework designed to help organizations of all sizes effectively build, improve and execute best practices in today’s fast-changing third party risk environment.

The first two modules, the Framework Introduction and a module focused on Risk Management Basics, are available to members on the Shared Assessments website.

As the practice of Third Party Risk Management has evolved, it has become increasingly evident that a fully developed TPRM framework could provide valuable assistance to organizations working to improve outsourcing oversight processes.

Shared Assessments has addressed the need for more detailed guidance by creating the Program’s TPRM Framework, which was developed with the collective intelligence of the Shared Assessments’ membership, a global community of experienced third party risk management practitioners in a broad array of industries.

Framework content is designed to be useful for board members, C-level executives and both beginning and advanced practitioners.

“There has been a significant increase in third party-related vulnerabilities in recent years, which has in turn resulted in increased demand for Shared Assessments Program resources, so the development of the TPRM Framework is needed now more than ever,” said Shared Assessments Chairman and CEO Catherine A. Allen.

“Increasing third party risks, together with new and changing regulatory mandates, require a new approach for providing the knowledge and practical skills necessary to help organizations more effectively manage third party risk. The new TPRM Framework represents a critical and effective step forward to help organizations move toward best risk management practices.”

TPRM has emerged as an important practice area within organizational risk management programs, yet annual benchmarking research indicates that only 40 percent of organizations have fully mature TPRM programs (The Santa Fe Group, Shared Assessments Program and Protiviti, Inc., 2019). The TPRM Framework encompasses all aspects of operational risk, including information security.

Gary Roboff, Senior Advisor at The Santa Fe Group, and the lead on the development of the Framework, noted, “The TPRM Framework is designed to provide guidance for organizations seeking to develop, optimize and manage Third Party Risk best practices.

“The Framework also provides guidance about how to implement meaningful incremental improvements in TPRM practice maturity in organizations where resources may be constrained. Resource allocation is a significant obstacle for almost every organization in the current environment.”

Third Party Risk Management basics module

For practitioners, TPRM Risk Basics introduces the importance of robust program governance and of tactics for driving a strong organization-wide risk culture and earning senior management approval for resources. Additionally, TPRM Risk Basics features a short primer that examines concepts including:

  • Inherent and residual risk
  • Risk appetite statements and frameworks
  • Risk tolerance metrics and other foundational elements
  • Program prerequisites and process factors to be considered when building an organization’s TPRM program, including factors relevant to making a decision about whether or not to outsource a specific business function or activity

Read the Full Article here: Help Net Security – News

Kaloom launches flowEye, an AI-driven real-time in-band network telemetry and analytics solution

Kaloom, an emerging leader in the automated data center networking software market, announced flowEye, an AI-driven real-time in-band network telemetry (INT) and analytics solution. flowEye enables data center managers to achieve higher performance and lower OPEX via more powerful monitoring, analytics and troubleshooting capabilities.

Traditional network telemetry and analytics solutions were not built for today’s virtual architectures. Packets now travel through virtual system infrastructure including controllers, routers, gateways, security and other elements that may each be housed on different hardware.

Existing methods such as traffic sampling are ill-equipped to provide real-time visibility into the many types of data traversing the virtual landscape and the number of hops that data takes as it travels.

Kaloom’s flowEye traces actual packet routes through the data center, providing insight into where packets have been and what affects throughput among both virtual and physical elements.

Kaloom’s fully integrated flowEye enables customers to build “self-driving” data center networks and closed-loop systems that encompass orchestration, analytics, and self-healing remediation for better operation and lower OPEX.

flowEye removes the need for separate monitoring and packet-brokering appliances, reducing CAPEX by at least 2-3x. Data centers can now automatically correct anomalies in traffic handling, fix network breaks, and more effectively scale network resources up or down, improving overall management.

Kaloom’s INT collects and reports about network state using in-band network telemetry data (i.e., metadata) to a backend analytics engine for detailed analysis. INT is performed in the data plane without impacting the control plane. Packets contain header fields that are interpreted as “telemetry instructions” by network devices.

Specific, customized data can be collected from multiple packets in real time based on programmable criteria and sampling rates selected by customers. Kaloom’s INT methodology provides greater real-time knowledge of network state than traditional methodologies, particularly for network troubleshooting and diagnostics.

Traditionally, telemetry has been performed using synthetic or statistical sampling and packet probing protocols such as ICMP echo that provide very little knowledge about the state of the network.

Programmable ASIC-based switches provide more granular, real-time insight into packets and data flows through the network. Telemetry data can include, but is not limited to, the data path, queue occupancy, and the latency experienced by packets.

Aggregating telemetry data generates detailed reports about network state, as well as critical telemetry reports on packet-drop or queue-alert events.

Kaloom’s INT enables advanced real-time packet tracing of network routes, akin to traceroute. The networking nodes along the path follow the INT instructions, which tell each device what state to collect and write into the packet as it transits the network.

This information provides better real-time granularity and facilitates root cause analysis, so emerging network problems can be pinpointed early and corrective actions taken.
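
The hop-by-hop collection described above can be sketched as a small simulation: each node appends its own state to the packet's INT stack as the packet transits, and a collector derives the path and per-hop latency at the sink. The node names and metadata fields below are invented for illustration; a real INT deployment defines these in the data-plane program.

```python
# Sketch of in-band network telemetry: every hop writes its state into the
# packet itself, so the sink can reconstruct the path and per-hop latency
# without any out-of-band probing.
def transit(packet, hops):
    """Each hop appends (switch_id, queue_depth, latency_us) to the INT stack."""
    for switch_id, queue_depth, latency_us in hops:
        packet["int_stack"].append(
            {"switch": switch_id, "queue": queue_depth, "latency_us": latency_us}
        )
    return packet

packet = {"payload": b"...", "int_stack": []}
packet = transit(packet, [("leaf1", 4, 12), ("spine2", 17, 85), ("leaf3", 2, 9)])

# The collector at the sink reads the stack: actual path, total latency,
# and which hop is the bottleneck.
path = [h["switch"] for h in packet["int_stack"]]
total_latency = sum(h["latency_us"] for h in packet["int_stack"])
worst_hop = max(packet["int_stack"], key=lambda h: h["latency_us"])
print(path, total_latency, worst_hop["switch"])
```

Because the measurements ride inside real traffic, the collector sees the route and queueing that actual packets experienced, rather than what a synthetic probe happened to observe.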

Competitive products often require additional equipment in the network to measure traffic, adding to total network cost and packet latency. They also fail to provide a real-time view of data traffic, since they employ synthetic, sampled, or statistical data collection methods.

With Kaloom’s flowEye solution, collected data can be further analyzed by the analytics engine and presented in an advanced dashboard offering:

  • Spatial view – the full network, hop by hop
  • Geo view – critical issues and where they are physically/geographically located
  • Temporal view – drill-down into specific issues in real time, with sub-second precision and metrics
  • Cards view – details of SLAs and KPIs for parameters such as latency, jitter, loss, throughput, number of packets, number of flows, fragmentation, and link-level throughput

Kaloom’s analytics suite enables customers to visualize, find and fix infrastructure issues across multi-data center environments and correlate applications to specific network flows of actual traffic, not sampled or synthetic flows.

It monitors every packet and flow while visualizing the results with 100ms precision. The dashboard presents point, segment, and transactional metrics and displays results in less than five seconds.

“Our automated real-time monitoring, analytics and troubleshooting capabilities will change the way data centers are currently managed. Until now, customers have been using expensive additional equipment that adds to the cost and latency of monitoring the network state.

“Also, they have been doing so on sampled or synthetic traffic, not on the actual traffic, and not even close to real time. Kaloom has taken a unique approach and can now provide an industry-first, real-time visibility of the packets and flows for the actual traffic, thus guaranteeing optimum operational visibility for data centers,” said Suresh Krishnan, chief technology officer at Kaloom.

“In-band Network Telemetry, analytics and automation are well understood for their potential to improve data center operations and enhance application deliveries,” said Paul Parker-Johnson, chief analyst at ACG Research.

“Kaloom’s flowEye innovations significantly increase the granularity of insight that can be applied to pursuing those goals. And the fact that its functions are programmable into functioning data center resources means the outputs from its analytics are available faster than existing solutions, and at lower overall cost, since no additional equipment needs to be installed. Kaloom’s perspective on analytics and automation in the data center environment is both innovative and forward-looking.”

Read the Full Article here: Help Net Security – News

Cynet Free Visibility Experience – Unmatched Insight into IT Assets and Activities

Real-time visibility into IT assets and activities introduces speed and efficiency to many critical productivity and security tasks organizations are struggling with—from conventional asset inventory reporting to proactive elimination of exposed attack surfaces.

However, gaining such visibility is often highly resource consuming and entails manual integration of various feeds.

Cynet is now offering end-users and service providers free access to its end-to-end visibility capabilities.

The offering consists of 14 days access to the Cynet 360 platform, during which users can gain full visibility into their IT environment—host configurations, installed software, user account activities, password hygiene, and network traffic.

“When we built the Cynet 360 platform we identified a critical need for a single-source-of-truth interface where you get all the knowledge regarding what exists in the environment and what activities take place there,” said Eyal Gruner, Cynet founder and CEO.

“Both the operational and security implications of having all this data available at the click of a button are dramatic.”

In today’s IT security landscape, there are two groups in which the lack of visibility plays a role.

The first group is organizations that acknowledge the necessity of certain tasks – common examples include keeping applications patched, applying change management procedures, and tracking installed software. Performing these tasks without the ability to easily retrieve the required data is difficult and error-prone.

The second is security service providers that cater to a multitude of customers. This group is subject to the same pains as the first, but at a much larger scale.

Cynet 360 visibility capabilities can boost the efficiency of security monitoring workflows, enabling MSSPs/MSPs to better address their customers’ needs with significantly less effort.

With Cynet 360, operators can easily perform and automate tasks such as:

  • Check if there are systems and apps with missing security patches.
  • Know the exact number of hosts, their operating system versions, and installed software.
  • Customize and create asset inventory reports.
  • Discover risky user accounts and network connections.

Cynet Vulnerability Assessment
Cynet Network Topology View
Cynet Activity Context View
Cynet Installed Software Display

The Cynet Free Visibility offering targets IT/security decision makers who recognize that a lack of visibility inhibits them from accomplishing critical tasks, whether as end-users or as service providers.

Using this offering, they can experiment with Cynet 360’s end-to-end visibility capabilities by applying them to either optimize existing tasks or perform new ones.

“It’s a rather worn-out phrase: you can’t secure what you don’t know,” says Gruner, “but it’s true all the same, and we are able to boost organizations in that direction. Available, high-res knowledge of your environment is the equivalent of a good opening move in chess – it narrows down the risks you face and enables you to focus on what really matters.”

Read the Full Article here: The Hacker News [ THN ]