3 strategies to protect your information system from internet scans (and resulting attacks)



In the previous two articles (Information System security: understanding the issue of exposure and 10 cyberattacks that exploited the principle of exposure), we presented the issue of service exposure on the internet and ten cyberattacks that exploited it. Today, we want to revisit three common strategies that companies use to address this problem. With remote access needs and teleworking having grown considerably in recent years, which strategies do companies choose to prevent their exposed services from becoming a gateway for attackers?

Method 1: Protecting Exposure Points

One of the most common ways to prevent services from being accessed or attacked by anyone is to protect the exposed points themselves. This strategy treats exposing services as a necessary evil and tends to leave the existing architecture untouched, at least for the services themselves. It covers all traffic analysis solutions, including:

  • Firewalls coupled with source IP address whitelists

The advantage of this technique is that it prevents anyone whose address is not on the list from establishing a connection with the service, thus blocking any exploitation of vulnerabilities or use of stolen credentials against it. The real difficulty is maintaining an up-to-date list of authorized IP addresses, which quickly becomes tedious once the information system has hundreds of users or mobility is commonplace.
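As a sketch of the idea, the check a firewall or reverse proxy applies with such a whitelist boils down to a membership test against a set of allowed networks. The ranges below are illustrative documentation addresses, not a real configuration:

```python
import ipaddress

# Illustrative allowlist: an office egress range and a single partner host.
# These are RFC 5737 documentation ranges, used here only as placeholders.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_allowed(source_ip: str) -> bool:
    """Return True if the connecting IP falls inside an allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```

The maintenance burden described above is visible even in this toy version: every new user, office, or mobile connection means editing `ALLOWED_NETWORKS` by hand.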


© Flickr - mmatins
  • IDS, IPS, and WAF analysis probes

These solutions inspect traffic and try to determine whether it is malicious. Whether they raise alerts, cut connections, or ban the IP addresses from which access attempts originate, they do not prevent the initial connection to the protected application, which can sometimes allow attackers to study the detection and blocking rules in order to bypass them. The real advantage of these "intelligent" systems, unlike the first method, is that they spare security administrators from maintaining whitelists of authorized addresses.
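To make the idea concrete, a minimal signature-based filter in the spirit of a WAF rule might look like the toy sketch below. Real products (ModSecurity, cloud WAFs) use far richer rule languages and normalization; these patterns are illustrative only, and their naivety shows exactly why attackers who can probe the rules often find bypasses:

```python
import re

# Toy signatures, deliberately simplistic. Real WAF rule sets also decode,
# normalize, and score requests rather than pattern-match raw strings.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # naive SQL-injection pattern
    re.compile(r"(?i)<script\b"),              # naive XSS pattern
    re.compile(r"\.\./"),                      # naive path-traversal pattern
]

def is_suspicious(request: str) -> bool:
    """Flag a request string if any signature matches."""
    return any(sig.search(request) for sig in SIGNATURES)
```

A payload written as `UN/**/ION SEL/**/ECT`, for instance, would slip past the first pattern, illustrating the cat-and-mouse game the paragraph above describes.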

In addition to traffic analysis, there is also:

  • Strengthening authentication mechanisms

Adding MFA (Multi-Factor Authentication), a more robust password policy, or client certificate authentication are all methods that administrators use to prevent attackers from easily authenticating with exposed applications. Although some of these measures effectively counter the use of stolen credentials or dictionary/brute-force attacks, attackers are sometimes able to bypass them, for example through MITM phishing (devious phishing methods that bypass MFA using remote access software) or by exploiting vulnerabilities that directly affect the application and render any authentication mechanism ineffective.
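As an aside on how one of these mechanisms works: TOTP, the time-based one-time password scheme behind most authenticator apps, is standardized in RFC 6238 (built on HOTP, RFC 4226) and fits in a few lines of standard-library Python. This is a sketch for understanding, not a hardened implementation:

```python
import hmac
import hashlib
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the current time."""
    return hotp(key, unix_time // step, digits)
```

With the RFC 6238 test vector (ASCII secret `12345678901234567890`, time 59, 8 digits) this yields `94287082`. Note that a MITM phishing proxy defeats TOTP not by breaking this math but by relaying the freshly typed code in real time, which is exactly the bypass discussed above.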

Method 2: Reducing the Attack Surface

This second method involves modifying the architecture of the information system and reconsidering the number of services accessible from a public network interface. The idea is to move services that were initially exposed on the internet, or should be made accessible remotely, inside the information system, within one or more private sub-networks, and to allow access through a single point of exposure, usually a VPN gateway or a bastion.

Thus, the attack surface can be significantly reduced if the information system has a multitude of services that need to be accessed remotely. This strategy is widely used and has the advantage of ensuring that only users who have access to the internal network, or a part of it, are able to establish a connection with the services.
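For example, when the single exposure point is an SSH bastion, OpenSSH's `ProxyJump` directive lets users reach internal machines only through it. The host names and addresses below are hypothetical:

```
# ~/.ssh/config — reach an internal host only via the single exposed bastion
Host bastion
    HostName bastion.example.com    # hypothetical public entry point
    User alice

Host internal-db
    HostName 10.0.2.15              # private address, not internet-routable
    User alice
    ProxyJump bastion               # tunnel the connection through the bastion
```

With this in place, `ssh internal-db` transparently hops through the bastion, and the database server never needs a public interface.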


© Flickr - Daniel Aleksandersen

However, it can sometimes be vulnerable. Beyond the critical vulnerabilities that can affect exposed VPN gateways, using a VPN can be tricky because it means sharing access to part of the IS network with "trusted" users. Malicious users (or, more generally, an internal threat) can thus reach network services they should not have access to. VPN access should therefore be combined with a robust IS architecture and segmentation, while keeping in mind that it opens up a range of additional attacks that are not usually feasible from outside the network (MITM attacks, ARP poisoning, IP address spoofing, TCP session hijacking, traffic eavesdropping, facilitated DoS attacks, etc.).

In our previous article on 10 cyberattacks that exploited the principle of exposure, four of them used an exposed VPN to succeed, either through the exploitation of critical vulnerabilities or the use of stolen credentials.

Method 3: Transfer the exposure

This third and final method involves shedding the responsibility for exposing services by delegating it to a third party. This includes, among others:

  • Using CDN (Content Delivery Network) services combined with a whitelist of IP addresses to prevent bypassing the protection service.

The best-known is undoubtedly Cloudflare, which filters requests before they reach the final service, analyzes traffic, and reduces the risk of attack. The service is thus exposed through Cloudflare's systems and inherits the protection mechanisms operated by the third party.

  • Using certain services managed by cloud providers.

For example, using a managed bastion such as Azure Bastion, accessible via the Azure portal, transfers the responsibility for exposing the service to Microsoft and allows SSH and RDP connections to target servers without exposing them directly.


© Flickr - Wonderlane
  • ZTNA (Zero-Trust Network Access) solutions

These newer approaches to remote access usually involve establishing connections between the services to be made accessible and a third-party infrastructure, which then handles authentication of the users holding the appropriate rights.

The possibilities offered by these exposure transfer methods generally have two drawbacks:

  • Either the services remain exposed (with an additional layer of protection, as with CDNs), and therefore remain detectable and attackable.

  • Or the services are no longer exposed, but the third party's infrastructure is, which poses two problems. On the one hand, a compromise of the third party's infrastructure, through its own exposed services, gives access to the services published there. On the other hand, the third party must be trusted, since it can naturally establish connections with the published services (and even eavesdrop on exchanges, being positioned as an intermediary).

Whether it's about protecting one's own exposure points, reducing the attack surface, or transferring responsibility to a third party, each strategy has its advantages and disadvantages.

However, new ways of designing remote access over the internet, such as ZTNA, are attractive in many ways. Could we not get rid of the drawbacks of such solutions (such as the need to trust the third-party exposure provider!) and keep the benefits?

To stay informed about the latest articles and the progress of the startup, don't forget to sign up for the newsletter: