I had always configured my MikroTik router's VPN to Microsoft Azure as a policy-based VPN, which uses IKEv1, so I wanted to move to a route-based VPN leveraging IKEv2, which gives me more granular security and more control over routing.
Configuration was straightforward, and I found a good article covering almost everything from the MikroTik side here; the only thing missing was the firewall rules to allow traffic from the Azure subnets to the on-premises subnets. Weirdly, I was able to ping from an Azure VM to an on-premises VM but not the other way around; even traceroute/ping from the MikroTik itself to the Azure VM was failing. The more disturbing part was that I was not seeing any traffic hitting the NAT rule or the mangle rule I had configured, so clearly the issue was on the MikroTik side, not Azure.
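For reference, the missing firewall rules looked roughly like the sketch below. The subnet addresses are placeholders for my environment, not values from the article, so adjust them to match your Azure VNet and on-premises LAN:

```
# Assumed addressing: Azure VNet = 10.1.0.0/16, on-prem LAN = 192.168.88.0/24
/ip firewall filter
add chain=forward src-address=10.1.0.0/16 dst-address=192.168.88.0/24 \
    action=accept comment="Allow Azure VNet -> on-prem LAN"
add chain=forward src-address=192.168.88.0/24 dst-address=10.1.0.0/16 \
    action=accept comment="Allow on-prem LAN -> Azure VNet"
```

These need to sit above any drop rules in the forward chain, or they will never be hit.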
After verifying the config time and time again, everything seemed normal, and the tunnel was established successfully from both the Azure and MikroTik sides. I found a couple of support articles discussing the same problem, but their resolution was different and did not apply to my scenario.
Anyhow, after checking the logs on the MikroTik side, I noticed that the VPN tunnel was suffering packet fragmentation issues around MTU, and it hit me: I had been testing NSX with servers connected directly to my MikroTik router, so I had set the MTU on all interfaces to 1600.
After setting the MTU back to 1500 on the internal interfaces, everything started working properly. There are ways around this using mangle rules, but that is not my thing, so I just set MTU 1500 on the internal interface used for the VPN tunnel and kept 1600 on the interfaces connected to NSX (I introduced a switch in between, which made it much easier).
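The fix itself is a one-liner per interface. A sketch, with interface names as placeholders for my setup:

```
# Internal interface carrying the VPN-bound traffic goes back to the standard 1500
/interface ethernet set ether2 mtu=1500
# Interfaces facing NSX keep 1600 for the overlay encapsulation overhead
/interface ethernet set ether3 mtu=1600
# Verify the result
/interface print where type="ether"
```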
An IPsec VPN from MikroTik to Azure requires an MTU of at most 1500 unless mangle rules are used to identify and manipulate the IKEv2 traffic to get it working properly. I prefer the easy way: dedicate one interface for the tunnel traffic and get it over with.
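For anyone who does prefer the mangle route, the usual approach is TCP MSS clamping on traffic traversing the tunnel, so TCP sessions negotiate a segment size that fits within the IPsec overhead. A hedged sketch; the Azure VNet address is an assumed placeholder, and 1350 is a commonly used MSS value for Azure IPsec tunnels:

```
/ip firewall mangle
add chain=forward protocol=tcp tcp-flags=syn dst-address=10.1.0.0/16 \
    action=change-mss new-mss=1350 tcp-mss=1351-65535 passthrough=yes \
    comment="Clamp MSS towards Azure"
add chain=forward protocol=tcp tcp-flags=syn src-address=10.1.0.0/16 \
    action=change-mss new-mss=1350 tcp-mss=1351-65535 passthrough=yes \
    comment="Clamp MSS from Azure"
```

Note this only helps TCP; non-TCP traffic that exceeds the effective tunnel MTU would still fragment, which is part of why I went with the dedicated interface instead.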