
Advanced Security automation in Microsoft Azure


Microsoft is releasing many new security features and investing heavily in this area, which is a very good sign of commitment from the company.

One of the major security features is Security Center, a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads, whether they run in Azure, in other clouds, or on premises.

The newest arrival, Azure Sentinel, currently in preview, provides intelligent security analytics for your entire enterprise at cloud scale. It gives you virtually limitless cloud speed and scale so you can focus on what really matters: easily collect data from all your on-premises assets, Office 365, Azure resources, and other clouds; effectively detect threats with machine learning built by Microsoft's security analytics experts; and automate threat response using built-in orchestration and automation playbooks.

Behind the scenes, these features actually provide security automation capabilities with excellent reporting, alerting, runbooks, and so on, and we can do a lot more with the support of the open source world.

Azure provides great automation capabilities through Automation Accounts and Blueprints, and containers are another great option.

I think containerization is a very powerful option; in Azure I use Service Fabric and Docker in different combinations, depending on the use case.

A good example is using a great security tool like OWASP ZAP, which provides a very extensive API and official Docker images.

For Docker hosting we have several options, such as virtual machines and Service Fabric, and we can use Azure Container Registry with Azure CI.

In Docker we can pull the ZAP image using the command below.

docker pull sfpsecuritycontainer.azurecr.io/owasp/zap2docker-stable

We can also use the weekly distribution (the owasp/zap2docker-weekly image) for continuous updates.

Run ZAP using the command below.

docker run -u zap -p 8080:8080 -i owasp/zap2docker-stable zap.sh -daemon -host 0.0.0.0 -port 8080 -config api.disablekey=true -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true

We can browse the API directly and try executing the calls from the UI to test them before use.
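With the API key disabled as above, the ZAP API can also be exercised from the command line; the target URL below is just an example.

```shell
# Check that the ZAP daemon is answering
curl "http://localhost:8080/JSON/core/view/version/"

# Start a spider scan against an example target, then poll its progress
curl "http://localhost:8080/JSON/spider/action/scan/?url=http://example.com"
curl "http://localhost:8080/JSON/spider/view/status/?scanId=0"
```

The same calls can be scripted to drive full scans from any automation pipeline.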

We can scan targets, produce logs, and push those logs into an Azure workspace; this is how we integrate our security tools with assets like Security Center and Sentinel.

HACKAZURE – Azure Bastion – What you need to know


Azure Bastion lets you connect via RDP and SSH to any VM in Microsoft Azure using private IP addresses, avoiding any public exposure.
You can find many blog posts about it on the internet, and to install Bastion you can follow the simple steps in the article below.


Azure Bastion has these main requirements:

  1. You must have a VNet
  2. In the VNet you must have a subnet named AzureBastionSubnet
  3. No route tables or delegations in the Bastion subnet
  4. You must use a subnet of /27 or larger.
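As a sketch, provisioning that meets the requirements above could look like this with the Azure CLI; the resource names and address ranges are placeholders.

```shell
# Create the dedicated Bastion subnet (the name is mandatory)
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name AzureBastionSubnet \
  --address-prefixes 10.0.1.0/27

# Bastion requires a Standard-SKU public IP
az network public-ip create \
  --resource-group my-rg \
  --name bastion-pip \
  --sku Standard

# Deploy the Bastion host into the VNet
az network bastion create \
  --resource-group my-rg \
  --name my-bastion \
  --vnet-name my-vnet \
  --public-ip-address bastion-pip
```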

The only major restriction I see is the requirement for a subnet with a specific name: in a complex and well-organised network scenario, all the address spaces are catalogued and organised, and creating a new subnet may require changes to that plan.

Using a new VNet with peering is not a viable option either, because the VNet hosting the VMs requires a subnet named AzureBastionSubnet anyway.

After some tests, these are the applicable options.

New VNet

Of course, this is the simplest and fastest option; all the articles suggest creating a new VNet with an AzureBastionSubnet subnet and deploying your VMs into that VNet together with Azure Bastion.
This method is easy, and it is a good option for a new VNet, but there is an important aspect to consider here: Azure Bastion requires a /27 subnet, which is a significant number of IP addresses (32).
I don't really like the idea of using 32 IPs for that, especially because they could be reserved for many other critical purposes, but if you are a small or medium company this is very applicable.

Creation of a new subnet in an existing VNet

I don't like this idea, but it is actually possible with this option, and I will explain why I don't like it.
You can extend the address space of your VNet to add more room to your network, and after that you can add the new subnet named AzureBastionSubnet.

As I mentioned before, this is not easily applicable in a well-organised network infrastructure, because the address spaces have already been assigned to regions and departments, and this method requires a significant effort from the network team.

If you don't have this problem, then I would say you probably still need to organise your network strategy, and I recommend doing that as soon as possible.

Dedicate an address space to Azure Bastion

This is, in my opinion, the best option for using Azure Bastion.

We dedicate a specific IP range to Azure Bastion, for example 192.168.*.*, and we use that range for Bastion only.

As you can see below, you can add the new address space to your VNet,

and then you can create the AzureBastionSubnet subnet.

This is the best way to use Azure Bastion because you dedicate a specific IP range to it and you don't touch your network design at all.
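With the Azure CLI, adding the dedicated address space and the Bastion subnet might look like this; the names and ranges are examples, and note that the update replaces the full address-space list, so the existing prefixes must be repeated.

```shell
# Add a dedicated 192.168.0.0/24 space alongside the existing one
az network vnet update \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefixes 10.0.0.0/16 192.168.0.0/24

# Carve the Bastion subnet out of the dedicated space
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name AzureBastionSubnet \
  --address-prefixes 192.168.0.0/27
```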

HACKAZURE – How To – Global scan of all public IP addresses on Azure


This article assumes the reader has a basic understanding of Azure security and governance.

Many companies expose public IPs and ports from Microsoft Azure, and many of them are unaware of it; a good practice is to check what we have open and why.

Port scanning an Azure infrastructure can be very challenging, especially in global and enterprise scenarios where many Enterprise Agreements and subscriptions are involved.
We have a few options to solve this problem.

The first option is using PowerShell or the Az CLI to write a script that lists all the ports opened by network security groups and firewalls, but I see a big problem with this option.
Companies and departments can adopt different firewalling strategies; the protection can be a network security group or a Fortigate appliance, for example.

The second option is using the Azure APIs, which honestly are pretty powerful, and we work with JSON data that is easy to serialize and manipulate; but in this case we still have the same problem: we are not sure what is protecting the IP that exposes the port.

The best solution is performing an active scan of every IP we find, using a proper tool like Nmap; the reason is that I want to be sure about which ports are open, and I don't want to depend on any API.

I consider Nmap one of the most powerful hacking tools, with tons of options, not just for scanning but also for advanced pentest operations; I recommend you look at it.

Because of the complexity of a global Azure environment, I see two main problems: first, the reconnaissance, collecting all the data I need; second, mapping the scanned IPs and ports back to the related Azure resources. Nmap can scan a list of IPs, but how do you map the hundreds of IPs reported by Nmap to the related Azure resources?
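To illustrate the raw approach, here is a minimal sketch of an Nmap list scan and how its output could be post-processed for mapping; the file names and the sample output line are hypothetical.

```shell
# A real run over the exported IP list would look like:
#   nmap -iL public-ips.txt -oG scan.gnmap --open
# Below, a sample line of Nmap's grepable output stands in for a scan.
printf 'Host: 10.0.0.4 ()\tPorts: 22/open/tcp//ssh///, 80/closed/tcp//http///\n' > scan.gnmap

# Extract "ip port" pairs for the open ports, ready to be joined back
# to the Azure resource list (e.g. in Excel or a script)
awk '/Ports:/ {
  ip = $2
  for (i = 1; i <= NF; i++)
    if ($i ~ /\/open\//) { split($i, p, "/"); print ip, p[1] }
}' scan.gnmap
# prints: 10.0.0.4 22
```

Even with this, the join back to subscriptions and resources remains manual, which is the gap described above.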

For that reason, I decided to integrate active hacking capabilities into Aziverso, which already has very powerful reconnaissance features, and mix them together.

To access any Azure resource, you need to set up an Azure service principal account or use an existing one (you can find all the information at https://aziverso.com), and you associate the account with the scope you want to scan, which can be a management group or multiple subscriptions.
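As a sketch, creating such a scoped service principal with the Azure CLI might look like this; the name and management-group ID are placeholders.

```shell
# Create a service principal with Reader access over a management group
az ad sp create-for-rbac \
  --name aziverso-scanner \
  --role Reader \
  --scopes /providers/Microsoft.Management/managementGroups/my-mg
```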

If you want to scan the entire Azure tenant, you need to perform a privilege escalation to Global Admin and add the account to the Tenant Root Group.

To obtain full visibility, you also need to elevate access in Azure AD: set "Access management for Azure resources" to Yes.

Log off and back on again, and you will see every Azure resource, subscription, and management group associated with the tenant; they can even belong to other Enterprise Agreements and contracts, it doesn't matter.

For privacy reasons, I am going to use my personal Azure environment, which contains five different subscriptions; I have done the same operation with hundreds of subscriptions and different Azure EAs without any issue.

First, I open Excel, and from the Aziverso Scaffold I scan the tenant to retrieve the list of all the subscriptions I can see.

Now I have my attack surface, and it can just as well be a list of hundreds of subscriptions.

Now I use Recon to retrieve all the public IPs. Recon is an amazing feature: it lists more than 2000 Azure APIs, and I can use the Excel sheets to map the values I want to pass to each API.

Now I have the list of all the public IPs.

The blank entries are not actually used or assigned, so we can easily filter them out using Excel's powerful filters and eliminate them from the list.

Now let's do the magic: it is time to scan!

In Aziverso I select Attack in the new Hacking area, and it opens the new feature.

I select the type of attack I want to perform, in this case an IP scan, and it will use Nmap's powerful capabilities for that.

I clear all the parameters and press Scan (I am planning a smart parameter mapping, but no time for that now), then go and have a good lunch and a coffee.

And here we have all the results!

Another great thing is the link to the specific Azure resource scanned: I can jump directly into the portal from Excel.

I am still working on this feature and will release it publicly very soon; feel free to contact me with any questions and especially for collaboration, any kind of collaboration: code, design, hacking.

I have many other hacking implementations planned, including advanced Nmap scans, Metasploit, and the Social-Engineer Toolkit, and with the inclusion of the Windows Subsystem for Linux (WSL 2), the pentest and hacking potential is endless.
For example, performing a scan of all the App Service endpoints in the entire tenant.

Stay tuned! And ping me!

13 years MVP recap and plans


Since 2006, the first of July has become one of the most important days for me: it is the MVP renewal day, working as usual while waiting for the magic email from Microsoft.

It is always emotional to see the email in my inbox, and it reminds me of the first time I saw it; that memory is always with me.

Thirteen years of adventures and challenges, from BizTalk Server and integration to Microsoft Azure today: a professional evolution made of nights of development, writing, events and talks, many new friends, and good and bad moments.

My first BizTalk community in 2006

The immortal Italian Gladiator team

The magic BizTalk Crew

So many community activities and adventures we did together and we are still doing…

The viral Chicken Way philosophy

All amazing friends and people around the world, always available and ready to help me in the good and bad moments

A new family in Portugal

The fun

It is great to be a Microsoft MVP; you are a member of a big family, all people with a common passion for community and technology.

My next plans? Well... Azure, Azure, and Azure. In the last two years I have been lucky because I am working on one of the biggest Azure infrastructures around the globe, and I am learning so much.

This year will be the year of Azure Governance; it is the most interesting topic for me and, in my opinion, the most complex.

I am planning several community activities and events, a book, and a training course; it will be another great year!

You are welcome to join the MVP family at any time; you can find plenty of information on the MVP website, and you can ask any MVP around the world.

I want to thank Microsoft and the MVP Award program for giving me the great opportunity and honour of spending another great year in this amazing family.

Install Kali on Windows 10 and interop commands


Having Kali Linux running on our Windows workstation can be a plus for many reasons; one of them is the possibility of using penetration testing tools and capabilities directly from Windows apps.

To install WSL2 open PowerShell as administrator and run the command below
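A typical sequence for enabling WSL, and WSL 2 on Windows builds that support it, is the following; run it in an elevated PowerShell.

```powershell
# Enable WSL and the virtual machine platform required by WSL 2
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

# After rebooting, make WSL 2 the default version
wsl --set-default-version 2
```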


Afterwards we will need to restart the system, and when it is back we can install Kali Linux from the Microsoft Store.

After that, I recommend updating the OS and packages.

sudo apt-get update

sudo apt-get dist-upgrade

Now we can list all the installed software using the commands below.

sudo apt-get install aptitude

aptitude -F' * %p -> %d ' --no-gui --disable-columns search '?and(~i,!?section(libs), !?section(kernel), !?section(devel))'

Something I really love is the interaction between Windows and Linux: we can now execute Linux commands from the Windows command prompt and use all the amazing Linux features.

For example, list all Windows folders from a command prompt as below.

We can easily use amazing Linux commands like the grep one!

dir | bash -c "grep Prog"

We can also invoke Windows commands from Kali Linux.

and we can execute Windows programs
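For example, assuming the default WSL interop is enabled, Windows executables can be called directly from the Kali shell and mixed with Linux tools.

```shell
notepad.exe                       # launches Windows Notepad from Linux
cmd.exe /c dir                    # runs a Windows shell command
cmd.exe /c ipconfig | grep IPv4   # pipes Windows output through grep
```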

Think about the amazing possibilities we now have; WSL is great.

Cloud vs Datacentre – what you need to know before moving your solution or company into the cloud


Is a datacentre provider cheaper or better than a cloud provider?

Without knowing anything about your specific solution, the general rule is: the more cloud-friendly you are, the more you save on costs.
With IaaS, your solution is completely hosted in VMs; in that case, a datacentre option could be cheaper than the cloud.
I say could be, because there are other factors to consider, such as the support and related services provided by the cloud platform that may not be provided by the datacentre, so you need to evaluate very carefully with the datacentre provider.

Never do an estimate for a single asset like a virtual machine, a database, or a website; we need to evaluate the solution and the entire scenario, especially for enterprise solutions and long-term strategies.

Being cloud-friendly means your solution embraces the cloud architecture, which can be Microsoft Azure, AWS, or even Google, and to achieve that you need to consider what the specific cloud platform can offer with respect to your solution architecture.

When we decide to move our solution into the cloud, we need to consider one very important aspect: the long-term investment.

In 90% of cases we will be able to migrate many of the technologies we are using to cloud assets, for example a SQL Server database to SQL Azure, or Internet Information Services to Azure App Service hosting, and much more.

A lift and shift with a gradual rollover is usually the best option: move your solution into the cloud as it is, then start re-engineering the pieces one by one, moving responsibility from you to the cloud provider.

I say responsibility because the costs of our solution depend largely on the responsibility zones.

The diagram below will show you how it works:

The blue area is our domain as customers: what we need to manage ourselves.

SaaS (Software as a Service) is completely managed by the provider, see Office 365 for example; PaaS (Platform as a Service) is partially managed by the provider, like App Services or SQL Azure; and IaaS (Infrastructure as a Service) is mostly managed by the customer.

I would place a datacentre solution at the IaaS level, so if we need to evaluate a pricing offer from a datacentre provider, we also need to be sure it includes all the missing responsibilities between the layers.

Another most important aspect is security, which, in terms of responsibility, defines how much of a secure perimeter we can design around our solution.

Another big aspect is governance: don't move into the cloud without a clear idea of how a cloud environment works.
Each cloud provider is different, with different rules and governance strategies; a cloud environment provides many features and ways to manage costs, security, resources, and more.

In my opinion, the technology part is the easy aspect; today we have almost everything we need to solve any problem; it is just a matter of googling...

The most complex area is governance: how to define our business strategy, how to organise our entire cloud infrastructure, security, networking, support, costs, and a lot more.

If you are thinking of moving your company or your solution into the cloud, my best recommendation is to work with a cloud expert who can provide the best solution for your governance and strategy and, most important, a person with experience in the field.

The cloud is a very complex topic, especially governance, and experience is the real key value for anyone working on it.

Azure management groups and good governance insights


Management groups are a key aspect of Azure management and governance, yet not many people know about them, even though Microsoft released this feature many months ago.

Management groups give us the ability to organise our subscriptions into groups and subgroups; everything we do will affect all the subscriptions and subgroups underneath.

This is very useful for many aspects.

  • We can manage RBAC and policies from one place across multiple subscriptions.
  • We can monitor cost usage for multiple subscriptions very easily.
  • We can organise our departments and business much better.

How can you manage Azure and save money without them?
Management groups are a must!

Below is how we should look at our Azure governance.

Think about a simple scenario like the one below.

Let's add some more interesting aspects, like a base subscription with our shared appliances.

And now let's use management groups for much better governance.

How to use them?

As usual, simply search for "management groups" in the Azure portal.

Create a new management group and assign your subscription to it; it is extremely simple.
To move a subscription, click on the right.
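The same steps can be scripted with the Azure CLI; the group and subscription names below are examples.

```shell
# Create a management group
az account management-group create \
  --name contoso-prod \
  --display-name "Contoso Production"

# Move a subscription into the new group
az account management-group subscription add \
  --name contoso-prod \
  --subscription "My Subscription"
```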

Let's focus on the important things to know.

On top you see

Click on the Details link to manage the management group: RBAC, and so on.

The Tenant Root Group is the root one, and it is associated with your Azure tenant (this is very important to understand).

Everything you do at the root level will affect all the subscriptions and all Azure EAs in that tenant.

The same applies to policies: if you disable a policy at the root level, it affects all the subscriptions and Azure EAs in that tenant.

Maybe in the future this design will change to keep the tenant and the management groups more separated; I am sure they will do it.

I mention policies because they are another must-know, and we manage policies from management groups.

I will write more about policies, there is so much to say, but let's stay focused on the important things: how do Azure policies work?

The rule is very simple: disable wins over enable.

In practice, if we want to disable a policy, we first need to disable it from the top and then enable it in the bottom group.

So we need to disable at the root and enable in the bottom group; it is a top-down approach where disable wins.

As I said, maybe this design will change soon…

Advanced troubleshooting and security checks on an Elasticsearch cluster


These are some quick notes for fixing the major issues.

The Elasticsearch cluster (ESC) is not responding

To test this issue, connect to the ESC on the load balancer IP at port 9200; the address should look like the one below.

At the prompt, enter the username and password, and you should receive a response like the one below.
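A quick way to run this check from a terminal is the cluster health API; the IP and credentials below are placeholders, and a green or yellow status means the cluster is answering.

```shell
# Query cluster health through the load balancer endpoint
curl -u elastic:password "http://10.0.0.4:9200/_cluster/health?pretty"
```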

If not, restart the ESC in this order:

Stop all nodes

Start the Data-X nodes

Start the Master-X nodes

Start the Kibana node

Try to log into the ESC again

The ESC also responds on port 5601 on the Kibana node; the URL should look like the one below.

This is the Kibana admin portal.

It is important to know that the Kibana admin portal and the cluster data search are not correlated; I mean that the admin portal can be up while the data nodes are not reachable by the app.

If you are using the ESC from another web app, then try to ping the ESC as below.

Open the App Service and launch the Kudu console.


Execute a tcpping against the internal load balancer.
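In the Kudu debug console, the check might look like this, with a placeholder load balancer IP; tcpping reports whether the App Service can open a TCP connection to the given host and port.

```shell
tcpping 10.0.0.4:9200
```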


If you don't get any reply, restart the ESC following the previous instructions; most of the time this solves the problem.




Aziverso reaches 2223 Azure API calls for more advanced research, troubleshooting and cost saving in Azure


Last evening I worked on some improvements and released a new API library for Aziverso.
The tool is now able to list up to 2223 Azure API calls, and we can use all of them to extend our daily research and for advanced troubleshooting in Azure.

I could list countless combinations of things we can do with this feature, from cost savings to security, and the list now also includes all the APIs in preview.

A quick, simple example I used just 10 minutes ago: we can easily find all the unattached disks in our Azure EA

and get the list ready in Excel; this is a huge saving of time and costs.
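As a hedged alternative outside Excel, the Azure CLI can produce a similar list: a managed disk whose managedBy property is empty is not attached to any VM.

```shell
# List unattached managed disks in the current subscription
az disk list \
  --query "[?managedBy==null].{name:name, rg:resourceGroup}" \
  -o table
```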

Aziverso is a free community tool for Azure, and you can find it below.


Aziverso 3.0.2 new features and bugs fixed


The new release, version 3.0.2, includes some interesting new features and bug fixes.

Two new cost reports:

Custom Departments by Region: this report provides the total amount for each internal custom department in each region.

Custom Departments by Location: this report provides the total amount for each internal custom department for each location in each region.

The reports now include the department's owner; this way it is much easier for the financial operator to find and contact the reference person.

Improved searching in the Naming Standard Assistant.

Bugs fixed:

Removed some noisy error messages for empty subscriptions during the group cost statistics.

Fixed the missing-template error in the Naming Standard and the double-message issue.

More information about Aziverso below