Question

"possible ssl heartbleed attempt" reported by Meraki firewall for Veeam port 902 traffic?


Userlevel 2

We're getting a fairly large number (~1K/week) of "OpenSSL SSLv3 large heartbeat response - possible ssl heartbleed attempt" IDS alerts and blocked connections from our Meraki firewall for ESXi-Veeam traffic.

The alerts look like this:

Time:         Mar 25 2:59:25
Type:         IDS Alert
Source:       <ESXi IP>:902
Network:      <network name>
Destination:  VeeamDRS (Meraki Network OS)
Action:       Blocked
Details:      SERVER-OTHER: OpenSSL SSLv3 large heartbeat response - possible ssl heartbleed attempt

Most of the alerts are for SSLv3, with a few for TLSv1.1 and TLSv1. The following table summarizes alerts over a two-week period.

Most prevalent threats

Threat                                                                                      Occurrences
SERVER-OTHER: OpenSSL SSLv3 large heartbeat response - possible ssl heartbleed attempt             1547
SERVER-OTHER: OpenSSL SSLv3 large heartbeat response - possible ssl heartbleed attempt              324
SERVER-OTHER: OpenSSL SSLv3 large heartbeat response - possible ssl heartbleed attempt              301
SERVER-OTHER: OpenSSL TLSv1.1 large heartbeat response - possible ssl heartbleed attempt             14
SERVER-OTHER: OpenSSL TLSv1 large heartbeat response - possible ssl heartbleed attempt               12
SERVER-OTHER: OpenSSL TLSv1 large heartbeat response - possible ssl heartbleed attempt                6

Per Veeam docs, this seems to be legitimate traffic. I am also unsure why SSLv3 or TLSv1 are in play, as all flagged servers are fully patched and are not supposed to use anything but TLS 1.2 for encrypted connections.

VMware vSphere 7.0 docs say:

vSphere enables only TLS by default. TLS 1.0 and TLS 1.1 are disabled by default. Whether you do a fresh install, upgrade, or migration, vSphere disables TLS 1.0 and TLS 1.1.

Checking the enabled/disabled TLS protocols on the ESXi hosts (per “Enable or Disable TLS Versions on ESXi Hosts”) shows the following protocols disabled:


UserVars.ESXiVPsDisabledProtocols sslv3,tlsv1,tlsv1.1
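
For anyone wanting to repeat this check across several hosts, here is a rough sketch using the third-party paramiko library (it assumes SSH is enabled on the hosts; host names and credentials are placeholders):

    import paramiko

    # Placeholder host names - substitute your own ESXi hosts
    HOSTS = ["esxi-01.example.local", "esxi-02.example.local"]

    for host in HOSTS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab convenience only
        client.connect(host, username="root", password="***")         # or use key auth
        # Read the advanced option that lists the disabled SSL/TLS protocols
        _, stdout, _ = client.exec_command(
            "esxcli system settings advanced list "
            "-o /UserVars/ESXiVPsDisabledProtocols"
        )
        print(host, stdout.read().decode(), sep="\n")
        client.close()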

We're not seeing any Veeam backup or other failures despite seemingly blocked connections.
Any idea why Meraki reports (and supposedly blocks) this traffic? (We submitted a support request to them.)

  • Veeam B&R 11.0.1.1261 on Windows Server 2019 (fully patched; only TLS 1.2 is enabled in Internet Options - no SSL 3.0 or older TLS versions)
  • ESXi 7.0u3
  • Meraki MX100 security appliance, firmware MX 18.107.2

Thank you!


15 comments

Userlevel 7
Badge +20

I think you are going to have to deal with Meraki on this one to see why it is doing this.

Userlevel 2

I think you are going to have to deal with Meraki on this one to see why it is doing this.

No contest, your honor. (Our operation is unlikely to be the world’s only outfit with B&R and vSphere behind a Meraki firewall - so whatever comes out of this might be useful to someone else here, where it can be found rather than buried in a support ticket…)

Userlevel 7
Badge +20

I think you are going to have to deal with Meraki on this one to see why it is doing this.

No contest, your honor. (Our operation is unlikely to be the world’s only outfit with B&R and vSphere behind a Meraki firewall - so whatever comes out of this might be useful to someone else here, where it can be found rather than buried in a support ticket…)

Absolutely agree you are not the only ones. Hopefully the case helps others also. 👍

Userlevel 7
Badge +7

Hi @kindzma, to be sure deprecated versions of SSL/TLS are disabled in your Veeam environment, you should check the Windows registry.

You can find the keys to create in this Veeam helpcenter section for example: https://helpcenter.veeam.com/docs/backup/vsphere/best_practices_analyzer.html?ver=120
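
As a quick sanity check before creating anything, a small Python sketch (run on the Veeam server itself) can dump the current state of the standard SCHANNEL keys; an absent key simply means the OS default applies for that protocol:

    import winreg

    # Standard SCHANNEL protocol key root on Windows
    ROOT = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"

    for proto in ("SSL 3.0", "TLS 1.0", "TLS 1.1", "TLS 1.2"):
        for role in ("Client", "Server"):
            path = rf"{ROOT}\{proto}\{role}"
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
                    enabled, _ = winreg.QueryValueEx(key, "Enabled")
                    print(f"{proto}/{role}: Enabled={enabled}")
            except FileNotFoundError:
                # Key or value absent - Windows falls back to the OS default
                print(f"{proto}/{role}: not set (OS default applies)")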


Userlevel 2

Hi @kindzma, to be sure deprecated versions of SSL/TLS are disabled in your Veeam environment, you should check the Windows registry.

You can find the keys to create in this Veeam helpcenter section for example: https://helpcenter.veeam.com/docs/backup/vsphere/best_practices_analyzer.html?ver=120


Thank you!

Most of the listed keys are missing (not present) in the registry. Those that are present are already set to the recommended values.

Should they be explicitly created? (If so, I find that quite unusual, especially given the absence of a specific NIST or other guideline recommending this, or any other justification. The NIST document the Veeam page links to, NIST Special Publication 800-52 Rev. 2 (PDF), contains no such recommendation to create otherwise-absent registry keys.)

Is there a Microsoft doc advising the same?

Userlevel 2

So far the response from Meraki support was:

  • Are you sure the traffic is legitimate?
  • If so - just allow-list it

The thing is, I think it’s legitimate based on several criteria - yet can’t be 100% sure:

  • only affects devices that are supposed to talk (a lot) to each other, and no other devices - i.e., if there were malware sitting on the Veeam appliance trying to infect other things on the network, we’d see these warnings from a lot more devices than just the four ESXi hosts. (The appliance can talk to anything on the subnet.)
    • only affects a few devices (out of hundreds on the subnet): the Veeam B&R appliance, and four of our ESXi 7.0u3 hosts
    • B&R can see and talk to 10+ other ESXis on the network but doesn’t perform any backup ops on them - and I never see any warnings from / to those
  • appears to coincide with backup events (can’t be sure, haven’t found a good way to run analytics on the events)

Is there a process that would allow me to be 100% sure that these are:

  • either false positives (i.e. Meraki erroneously flags TLS 1.2 connections as 1.0, 1.1, or SSL 3.0, with heartbleed maliciousness on top of it)
  • or genuine downgrades - i.e. the ESXi hosts or the Veeam appliance unexpectedly performing deprecated TLS/SSL handshakes due to e.g. a stale library or module that should have been upgraded or deleted, but wasn’t?
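
One direct check I can think of (a rough sketch, not yet tried here): use Python’s ssl module to pin a client handshake to each legacy protocol version and see whether the flagged endpoints actually accept it. The host name is a placeholder; also, since port 902 speaks VMware’s authd protocol and may not accept a bare TLS-on-connect handshake, the host’s management port 443 may be a more representative target:

    import socket
    import ssl

    HOST, PORT = "esxi-01.example.local", 443  # placeholders - use a flagged endpoint

    def probe(version: ssl.TLSVersion) -> str:
        try:
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE  # only the handshake outcome matters here
            ctx.minimum_version = version    # pin the offered range
            ctx.maximum_version = version    # to exactly one protocol version
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                with ctx.wrap_socket(sock) as tls:
                    return f"ACCEPTED ({tls.version()})"
        except (ssl.SSLError, ValueError, OSError) as exc:
            # ValueError usually means the local OpenSSL build refuses to offer
            # this version at all (common for SSLv3 on modern systems)
            return f"rejected ({type(exc).__name__})"

    for v in (ssl.TLSVersion.SSLv3, ssl.TLSVersion.TLSv1,
              ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
        print(f"{v.name}: {probe(v)}")

If every legacy probe comes back rejected, that would point toward Meraki misclassifying TLS 1.2 traffic rather than a genuine downgrade.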

Anything else I should ask Meraki support for?

Userlevel 7
Badge +20

These are definitely false positives as Veeam needs to communicate over these ports and protocols.  However, I noticed you are on v11 of the software and I know v12 deprecates some of the older TLS protocols so you might want to see about upgrading to the latest release and test.

Userlevel 2

However, I noticed you are on v11 of the software and I know v12 deprecates some of the older TLS protocols so you might want to see about upgrading to the latest release and test.

Have the latest v.12 ISO ready - just wanted to read up on any potential issues before doing the upgrade. (It’s a production server on physical hardware, i.e. no quick snapshots or restores should things go awry.)

Userlevel 7
Badge +20

However, I noticed you are on v11 of the software and I know v12 deprecates some of the older TLS protocols so you might want to see about upgrading to the latest release and test.

Have the latest v.12 ISO ready - just wanted to read up on any potential issues before doing the upgrade. (It’s a production server on physical hardware, i.e. no quick snapshots or restores should things go awry.)

Sounds good.  Just make sure you have a Configuration Backup and database backup stored somewhere for a rollback if needed.  There are no real big issues I have seen, and we have had 12.1.1.56 deployed in multiple datacenters for over a month now.  Keep us posted how the upgrade goes and if that helps with the Meraki alerts.

Userlevel 7
Badge +20

Hi,

Just seen this thread, quick question. If you do a packet capture, do you actually see any SSLv3 traffic as part of negotiations etc.? Thinking a capture filter could help isolate nodes and ports to identify what’s doing this.

Userlevel 2

If you do a packet capture, do you actually see any SSLv3 traffic as part of negotiations etc? Thinking a capture filter could help isolate nodes and ports to identify what’s doing this

In the pcap snapshot that Meraki provides via “inspect packet”, Wireshark shows no results when searching for “ssl.handshake.version” or even “ssl”.


(Haven’t done any packet capture otherwise.)

Thinking a capture filter could help isolate nodes and ports to identify what’s doing this

Nodes and ports seem to be already isolated: the sources are our ESXi hosts, the source ports are in the 50300-50400 range, and the destination is the Veeam B&R v11 appliance, port 902.
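
One detail worth double-checking: newer Wireshark releases renamed the ssl.* display filters to tls.*, so an empty result for “ssl” may just mean the filter name is outdated (“tls.handshake.version” is the current field). If I get a fuller capture, a sketch like this (using the third-party pyshark package; the file name is a placeholder) could scan it for ClientHello versions:

    import pyshark

    # Scan a capture for TLS/SSL ClientHello messages; the file name is a placeholder
    cap = pyshark.FileCapture(
        "veeam_esxi.pcap",
        display_filter="tls.handshake.type == 1",  # ClientHello only
    )
    for pkt in cap:
        # handshake_version is the hello's legacy_version field; the real negotiated
        # version may also be carried in the supported_versions extension
        print(f"{pkt.ip.src}:{pkt.tcp.srcport} -> {pkt.ip.dst}:{pkt.tcp.dstport} "
              f"version={pkt.tls.handshake_version}")
    cap.close()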

Userlevel 7
Badge +20

Well, if you look here - Ports - User Guide for VMware vSphere (veeam.com) - port 902 is a required port for communication from VBR to ESXi.  I am pretty sure this traffic is safe.

Userlevel 7
Badge +20

Yeah, the traffic itself is safe, but I’d say look into the negotiation and see which side is trying to suggest a lower form of TLS. Once we know this, you can enforce TLS 1.2 only on both sides and this should go away 🙂

Userlevel 2

Well, if you look here - Ports - User Guide for VMware vSphere (veeam.com) - port 902 is a required port for communication from VBR to ESXi.  I am pretty sure this traffic is safe.

Right… 💡 (My bad, didn’t mention the port in the OP.)

Per Veeam docs, this seems to be legitimate traffic.


Yeah, the traffic itself is safe, but I’d say look into the negotiation and see which side is trying to suggest a lower form of TLS. Once we know this, you can enforce TLS 1.2 only on both sides and this should go away 🙂

Suppose I could try a small manual backup, see if Meraki flags it, do it again with a pcap if it does, and then have some fun digging into packets… Aye-aye, cap’n, will do… (Unless I upgrade to v.12 first and the issue goes away on its own - as @Chris.Childerhose suggested it might...)

Userlevel 7
Badge +20

Sounds like a plan, as your PCAP will contain all supported SSL protocols and ciphers. It could be something as simple as a cipher mismatch that Meraki thinks is a heartbleed attempt; otherwise you might find that one side is unexpectedly offering older SSL options despite your config. Let’s find out!
