Category Archives: Microsoft
Native JSON support now in preview in Azure SQL Managed Instance
Processing JSON data in Azure SQL Managed Instance just got more performant thanks to a new way JSON data is stored and handled. Now in preview for Azure SQL Managed Instance configured with the Always-up-to-date update policy, JSON data can be stored in a new binary format, with the database column declared as the new JSON data type:
CREATE TABLE Orders (order_id int, order_details JSON NOT NULL);
All existing JSON functions support the new JSON data type seamlessly, with no code changes. There are also a couple of new aggregate functions:
1. Constructing a JSON object from an aggregation of SQL data or columns:
SELECT JSON_OBJECTAGG( c1:c2 )
FROM (
VALUES('key1', 'c'), ('key2', 'b'), ('key3', 'a')
) AS t(c1, c2);
2. Constructing a JSON array from an aggregation of SQL data or columns:
SELECT TOP(5) c.object_id, JSON_ARRAYAGG(c.name ORDER BY c.column_id) AS column_list
FROM sys.columns AS c
GROUP BY c.object_id;
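As a quick illustration of using the new type from an application, here is a minimal Python sketch using pyodbc. The connection string and the sample document are placeholders, and it assumes the server implicitly converts the supplied string into the JSON type (existing JSON functions then work against the column unchanged, as noted above):

import json
import pyodbc

# Hypothetical connection details; replace server, database, and credentials with your own.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=your-managed-instance.database.windows.net;"
    "DATABASE=your_database;UID=your_user;PWD=your_password"
)
cursor = conn.cursor()

# Insert an order; the JSON document is sent as a string and stored in the JSON column.
order = {"customer": "Contoso", "items": [{"sku": "A-100", "qty": 2}]}
cursor.execute(
    "INSERT INTO Orders (order_id, order_details) VALUES (?, ?)",
    1, json.dumps(order)
)
conn.commit()

# Existing JSON functions, such as JSON_VALUE, query the new type with no code changes.
cursor.execute("SELECT JSON_VALUE(order_details, '$.customer') FROM Orders WHERE order_id = 1")
print(cursor.fetchone()[0])  # prints: Contoso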
For a quick introduction you can watch a short video explaining the very same functionality on Azure SQL Database:
Resources:
JSON data type (preview) – SQL Server | Microsoft Learn
JSON_OBJECTAGG (Transact-SQL) – SQL Server | Microsoft Learn
JSON_ARRAYAGG (Transact-SQL) – SQL Server | Microsoft Learn
Microsoft Ignite: Don’t wait for the future—invent it
From November 18-22, 2024, join thousands of other curious and inspired minds at Microsoft Ignite to learn what’s possible with AI. Explore tools, build skills, and form partnerships to grow your business and reach more customers—safely, securely, and responsibly.
In addition to a first look at the latest AI technology, you’ll have built-in time to connect with peers and industry experts, network with Microsoft leaders, explore co-sell opportunities, and get all the details on how the Microsoft AI Cloud Partner Program helps you innovate and grow with updated benefits and offerings.
Whether you’re aiming to build a technical roadmap, a brain trust of collaborators and innovators, or a plan for growing your business with Microsoft, you can do it at Ignite. Your business, our shared customers, and organizations across the world are counting on it. Register today. Spots are limited for in-person attendees, and we don’t want you to miss this.
Register for Ignite
Things I learned as a member of the Microsoft Community
For over 15 years at Microsoft, my passion was amplifying the voices of our product users. Whether it was via our EAPs (early adopter programs), MVPs, Tech Community members, STEP, MODE, or IT, it did not matter: their voices equally and importantly reflected every user. I am still part of the Microsoft Community. I speak onsite at events, participate in online conferences, and post to different tech communities – like this one. That may seem like a lot of time to dedicate to helping others, so let me tell you what you get in return. Here are four important things I have learned along the way:
“Together, Apes Strong”- Rise of the Planet of the Apes
aka The Power of the Community
Before joining Microsoft, I was an MVP. I got to see firsthand what happens when people are enthusiastic about a product or service they use every day.
Communities have the power to bring people together from all corners of the world, creating a sense of belonging and support. These communities allow individuals to share their experiences, knowledge, and passions with others who have similar interests. This exchange of information allows its members to gain new perspectives and insights from diverse viewpoints.
Additionally, online communities provide a platform for individuals to find support and encouragement, whether they are facing personal challenges or pursuing their goals. Finally, when enough people start to discuss a problem about a product or service, together, their voices are heard, and change happens. Be a change agent within the change agency – the best community in tech.
“Danger Will Robinson, Danger!”- Lost in Space
aka Learning from the Others’ Mistakes
Since starting my own company, StephenLRose.com, I now work with clients across a variety of verticals, from finance and pharmaceuticals to manufacturing and services. What I have learned: no matter how well versed you are with any product, individuals and organizations have some “unique” way of using it that no one else has tried before. One instance was when a company told its users to store all their documents in the Windows sub-folder because it was more secure than My Documents (what became OneDrive), since hackers “wouldn’t look there.” Sigh.
I shared this story with the community to see if there was any precedent or truth to it, but more so to understand the dangers of doing this. I got a ton of great answers back, and the community provided a more well-balanced answer than I could have come up with by myself. To be honest, I was initially gobsmacked by this “security practice,” but with insights from the community, I was able to give them a coherent, guidance-based response.
I am sure that many of you have stories, logical and illogical, to share with the community that would help others avoid the mistakes that have already been made. I highly encourage sharing your stories. Feel free to at-mention me, or if it’s a juicy, tricky one, “at-mention everyone” (aka, the community) to help you work through it.
“Darmok, whose arms were wide”- Star Trek Next Generation (S5 Ep.2)
aka The Value of Sharing Your Knowledge
I am old. I remember using 56k x2 modems, PCMCIA slots, IOMEGA disks, Cheetah Fastback and using DC PROMO in the Terminal to promote a Server from BDC (Backup Domain Controller) to PDC (Primary Domain Controller). But with that comes years of experience and understanding. I can comfortably contribute to a chat with everyone from the CEO to the Backup to the Assistant Administrator because I have done all their jobs. Just like in a restaurant, the manager who has worked at every job and every station in the restaurant gets respect because they have been there.
Sharing your knowledge is so important to building a strong community. That can be through writing a blog, doing a podcast, giving a talk at a local user group or a larger M365 or TechCon365 conference, or answering questions in the Tech Community support forums. You are the ones whose jobs depend on knowing these software packages inside and out. This is your chance to help others become successful and, in return, have them be there for you when you need help with a new product because, as Ferris Bueller once said, “Life moves pretty fast.”
“By Grabthar’s hammer, by the suns of Worvan, you shall be avenged!”- Galaxy Quest
aka Community has your back
At conferences I like to create and attend after-hours meet-ups with names like Copilot Lessons Learned, Adoption: Share Your Tales or Terror, or my personal favorite, Have A Cigar/Share Your Frustration Meet-Up. Find opportunities to share; the community loves when you share your story.
It’s evenings like this that reassure me the community is alive for the right reasons. When one of us shares their secret sauce, it encourages others to do the same. I can’t tell you how many times people have been surprised when they have asked me to share something on social for them. Maybe it’s asking if I can speak to their 50-person user group on the East Coast via Teams or be a guest on their podcast. I love doing this, and doing it in return for others. Why? Because we have each other’s backs. If a troll comes after one of us, we will respond in vast numbers and push them back under the bridge from whence they came! If you have a session at a conference, we will help fill those seats and share decks we have done in the past to help accelerate and reaffirm.
“And may the force be with you, always”- Star Wars – A New Hope
The day I left Microsoft after 15 years was a hard one for me. After posting my thoughts and thank-yous on LinkedIn, I was amazed at how many people thanked me for helping them on their journey to success through my talks, one-on-one time with them, my blogs, or webcasts. It really helped during a tough time. Then those same folks reached out to help me connect and even offered me projects. The community had my back after all those years of being there for them. And I am eternally grateful.
Like Luke and Han getting a medal, it was a moment that made me reflect on my journey: the lifelong friendships I have made, the community members we’ve lost that we think of every day, and the feeling that I’ve got this because so many people have got me.
Thank you, all my friends, and may the force be with you, always.
— Stephen Rose
About Stephen
Stephen has been helping companies all over the world plan, pilot, deploy, manage, secure, and adopt products including Microsoft 365, Teams, and Copilot, as well as a variety of AI tools and third-party products.
Stephen was a business owner for many years and an MCT and MVP before he became part of Microsoft in 2009. While working there, he oversaw IT pro training and content for Windows, OneDrive, Office, Teams, and Copilot until he left in 2023.
Currently he is consulting with a variety of customers, helping them manage change and new work methods by showing companies how to use the tools they have today more effectively and get ready for the AI tools they will need to stay ahead.
Check out all the great videos featuring members of our community on his website at StephenLRose.com/videos. Here’s a sample from Stephen’s show, UnplugIT, “Unlocking AI’s Potential in SharePoint: A Conversation with Richard Harbridge (CTO at 2toLead)”:
Visit StephenLRose.com to learn more:
• Find him on X: @StephenLRose
• LinkedIn: linkedin/in/StephenLRose
Building Bronze Layer of Medallion Architecture in Fabric Lakehouse using WAL2JSON
Introduction
If you work in data engineering, you may have encountered the term “Medallion Architecture.” This design pattern organizes data within a Lakehouse into distinct layers to facilitate efficient processing and analysis. Read more about it here. It is also a recommended design approach for Microsoft Fabric. To make a Lakehouse usable, data must pass through several layers: Bronze, Silver, and Gold. Each layer focuses on progressively enhancing data cleanliness and quality. In this article, we will specifically explore how to build the Bronze layer using real-time data streaming from existing PostgreSQL databases. This approach enables real-time analytics and supports AI applications by providing real-time, raw, unprocessed data.
Image source – https://www.databricks.com/glossary/medallion-architecture
What is Bronze Layer?
This layer is often referred to as the Raw Zone, where data is stored in its original format and structure. By the common definition, the data in this layer is typically append-only and immutable, but this can be misunderstood. While the intention is to preserve the original data as it was ingested, it does not mean that there will be no deletions or updates. Instead, if deletions or updates occur, the original values are preserved as older versions. This approach ensures that historical data remains accessible and unaltered. Delta Lake is commonly used to manage this data, as it supports versioning and maintains a complete history of changes.
PostgreSQL as the source for Bronze Layer
Imagine you have multiple PostgreSQL databases running different applications and you want to integrate their data into a Delta Lake. You have a couple of options to achieve this. The first approach involves creating a Copy activity that extracts data from individual tables and stores it in Delta tables. However, this method requires a watermark column to track changes or necessitates full data reloads each time, which can be inefficient.
The second approach involves setting up Change Data Capture (CDC) in PostgreSQL to capture and stream data changes continuously. This method allows for real-time data synchronization and efficient updates to OneLake. In this blog, we will explore a proof of concept for implementing this CDC-based approach.
How to utilize PostgreSQL logical decoding, Wal2json and Fabric Delta Lake to create a continuously replicating bronze layer?
We will be utilizing PostgreSQL logical replication, Wal2Json plugin and PySpark to capture and apply the changes to delta lake. In PostgreSQL, logical replication is a method used to replicate data changes from one PostgreSQL instance to another or to a different system. Wal2json is a PostgreSQL output plugin for logical replication that converts Write-Ahead Log (WAL) changes into JSON format.
Setup on Azure PostgreSQL
Change the following server parameters by logging into the Azure portal and navigating to “Server parameters” for your PostgreSQL service.
wal_level: logical
max_replication_slots: greater than 0 (e.g., 4 or 8)
max_wal_senders: greater than 0 (e.g., 4 or 8)
Create a publication for all the tables. A publication is a feature of logical replication that allows you to define which tables’ changes should be streamed to subscribers.

CREATE PUBLICATION cdc_publication FOR ALL TABLES;
Create a replication slot with wal2json as the plugin name. A slot represents a stream of changes that can be replayed to a client in the order they were made on the origin server. Each slot streams a sequence of changes from a single database. Note: the wal2json plugin is pre-installed in Azure PostgreSQL.

SELECT * FROM pg_create_logical_replication_slot('cdc_slot', 'wal2json');
You can test whether replication is running by updating some test data and running the following command. (Note that pg_logical_slot_get_changes consumes the changes it returns; use pg_logical_slot_peek_changes instead if you only want to inspect them.)

SELECT * FROM pg_logical_slot_get_changes('cdc_slot', NULL, NULL, 'include-xids', 'true', 'include-timestamp', 'true');
Now that you have tested the replication, let’s look at the output format. The following are the key components of the wal2json output, followed by examples.
xid: The transaction ID.
timestamp: The timestamp when the transaction was committed.
kind: The type of operation (insert, update, delete).
schema: The schema of the table.
table: The name of the table where the change occurred.
columnnames: An array of column names affected by the change.
columntypes: An array of column data types corresponding to columnnames.
columnvalues: An array of new values for the columns (present for insert and update operations).
oldkeys: An object containing the primary key or unique key values before the change (present for update and delete operations).
For INSERT statement
{
  "xid": 8362757,
  "timestamp": "2024-08-01 15:09:34.086064+05:30",
  "change": [
    {
      "kind": "insert",
      "schema": "public",
      "table": "employees_synapse_test",
      "columnnames": ["EMPLOYEE_ID", "FIRST_NAME", "LAST_NAME", "EMAIL", "PHONE_NUMBER", "HIRE_DATE", "JOB_ID", "SALARY", "COMMISSION_PCT", "MANAGER_ID", "DEPARTMENT_ID"],
      "columntypes": ["numeric(10,0)", "text", "text", "text", "text", "timestamp without time zone", "text", "numeric(8,2)", "numeric(2,2)", "numeric(6,0)", "numeric(4,0)"],
      "columnvalues": [327, "3275FIRST NAME111", "3275LAST NAME", "3275EMAIL3275EMAIL", "3275", "2024-07-31 00:00:00", "IT_PROG", 32750, 0, 100, 60]
    }
  ]
}
For UPDATE statement
{
  "xid": 8362759,
  "timestamp": "2024-08-01 15:09:37.228446+05:30",
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "employees_synapse_test",
      "columnnames": ["EMPLOYEE_ID", "FIRST_NAME", "LAST_NAME", "EMAIL", "PHONE_NUMBER", "HIRE_DATE", "JOB_ID", "SALARY", "COMMISSION_PCT", "MANAGER_ID", "DEPARTMENT_ID"],
      "columntypes": ["numeric(10,0)", "text", "text", "text", "text", "timestamp without time zone", "text", "numeric(8,2)", "numeric(2,2)", "numeric(6,0)", "numeric(4,0)"],
      "columnvalues": [100, "Third1111", "BLOB", "SKING", "515.123.4567", "2024-08-01 00:00:00", "AD_PRES", 24000, null, null, 90],
      "oldkeys": {
        "keynames": ["EMPLOYEE_ID"],
        "keytypes": ["numeric(10,0)"],
        "keyvalues": [100]
      }
    }
  ]
}
For DELETE statement
{
  "xid": 8362756,
  "timestamp": "2024-08-01 15:09:29.552539+05:30",
  "change": [
    {
      "kind": "delete",
      "schema": "public",
      "table": "employees_synapse_test",
      "oldkeys": {
        "keynames": ["EMPLOYEE_ID"],
        "keytypes": ["numeric(10,0)"],
        "keyvalues": [327]
      }
    }
  ]
}
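Before wiring the stream into Spark, it can be useful to sanity-check the slot output from plain Python. The following is a minimal sketch (assuming psycopg2 is installed and using placeholder connection details) that peeks at pending changes without consuming them and prints the transaction ID, operation type, and table for each one:

import json
import psycopg2

# Placeholder connection details; replace with your Azure PostgreSQL server and credentials.
conn = psycopg2.connect(
    host="your_postgres_db.postgres.database.azure.com",
    dbname="postgres",
    user="postgres",
    password="your_password",
)
with conn, conn.cursor() as cur:
    # peek_changes returns pending changes without advancing the slot,
    # so the same data can still be consumed later by the Spark job.
    cur.execute(
        "SELECT data FROM pg_logical_slot_peek_changes('cdc_slot', NULL, NULL, "
        "'include-xids', 'true', 'include-timestamp', 'true')"
    )
    for (data,) in cur.fetchall():
        payload = json.loads(data)
        for change in payload.get("change", []):
            print(payload["xid"], change["kind"], change["schema"] + "." + change["table"])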
Create OneLake in Fabric. For detailed instructions, see this.
Create a Delta table with an initial load of the data using Spark:
# PostgreSQL connection details
jdbc_url = "jdbc:postgresql://your_postgres_db.postgres.database.azure.com:5432/postgres"
jdbc_properties = {
    "user": "postgres",
    # Add a "password" entry (or another authentication mechanism) as required by your server.
    "driver": "org.postgresql.Driver"
}

# Read data from the PostgreSQL employees table
employee_df = spark.read.jdbc(url=jdbc_url, table="employees", properties=jdbc_properties)

# Define the path for the Delta table in ADLS / OneLake
delta_table_path = "abfss://your_container@your_storage_account.dfs.core.windows.net/delta/employees"

# Write the DataFrame to a Delta table
employee_df.write.format("delta").mode("overwrite").save(delta_table_path)

# Read it back to verify the initial load
delta_df = spark.read.format("delta").load(delta_table_path)
delta_df.show()
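As a side note, if you would rather work with a managed Lakehouse table than an explicit abfss:// path, a variant of the write step might look like the following sketch (the table name employees_bronze is an assumption, not from the original walkthrough):

# Alternative (sketch): register the initial load as a managed Lakehouse table instead of writing to a path.
employee_df.write.format("delta").mode("overwrite").saveAsTable("employees_bronze")
spark.read.table("employees_bronze").show()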
Now, running the following code continuously will keep the data in the Delta lake in sync with the primary PostgreSQL database.

import json
import time

from pyspark.sql import SparkSession
from pyspark.sql.functions import lit
from delta.tables import DeltaTable
import pandas as pd

# PostgreSQL connection details (the spark session is assumed to be provided by the Fabric notebook)
jdbc_url = "jdbc:postgresql://your_postgres_db.postgres.database.azure.com:5432/postgres"
jdbc_properties = {
    "user": "postgres",
    "driver": "org.postgresql.Driver"
}

# Delta table details
delta_table_path = "abfss://your_container@your_storage_account.dfs.core.windows.net/delta/employees"
delta_table = DeltaTable.forPath(spark, delta_table_path)
delta_df = spark.read.format("delta").load(delta_table_path)
schema = delta_df.schema

while True:
    # Pull the next batch of changes from the replication slot.
    cdc_df = spark.read.jdbc(
        url=jdbc_url,
        table="(SELECT data FROM pg_logical_slot_get_changes('cdc_slot', NULL, NULL, 'include-xids', 'true', 'include-timestamp', 'true')) as cdc",
        properties=jdbc_properties
    )
    cdc_array = cdc_df.collect()

    for i in cdc_array:
        print(i)
        changedData = json.loads(i['data'])['change'][0]
        print(changedData)

        table_schema = changedData['schema']
        table = changedData['table']
        DMLtype = changedData['kind']

        if DMLtype == "insert" or DMLtype == "update":
            column_names = changedData['columnnames']
            column_values = changedData['columnvalues']
            source_data = {col: [val] for col, val in zip(column_names, column_values)}
            print(source_data)
            change_df = spark.createDataFrame(pd.DataFrame(source_data))

        if DMLtype == "insert":
            change_df.write.format("delta").mode("append").save(delta_table_path)

        if DMLtype == "update":
            old_keys = changedData['oldkeys']
            condition = " AND ".join(
                [f"target.{key} = source.{key}" for key in old_keys['keynames']]
            )
            print(condition)
            delta_table.alias("target").merge(
                change_df.alias("source"),
                condition
            ).whenMatchedUpdateAll().whenNotMatchedInsertAll().execute()

        if DMLtype == "delete":
            condition = " AND ".join([
                f"{key} = '{value}'"
                for key, value in zip(changedData["oldkeys"]["keynames"], changedData["oldkeys"]["keyvalues"])
            ])
            delta_table.delete(condition)

    # Wait before polling the replication slot again.
    time.sleep(10)
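Because the Bronze layer is expected to preserve older versions rather than overwrite them (see the Delta Lake discussion above), a quick way to confirm that each replayed change produced a new table version is to inspect the Delta history. The snippet below is a simple check using the delta_table handle defined earlier; the versionAsOf read is optional time travel back to the initial load:

# Inspect the Delta transaction history: every append, merge, and delete above appears as a new version.
delta_table.history().select("version", "timestamp", "operation").show(truncate=False)

# Optional: time travel back to the initial load (version 0) to compare with the current state.
spark.read.format("delta").option("versionAsOf", 0).load(delta_table_path).show()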
Conclusion
In conclusion, building the Bronze layer of the Medallion Architecture using wal2json from PostgreSQL as the source to Fabric OneLake provides a robust and scalable approach for handling raw data ingestion. This setup leverages PostgreSQL’s logical replication capabilities to capture and stream changes in real-time, ensuring that the data lake remains up-to-date with the latest transactional data.
Implementing this architecture ensures that the foundational layer is well structured and becomes a solid base for the layers that follow, while also supporting real-time analytics, advanced data processing, and AI applications.
By adopting this strategy, organizations can achieve greater data consistency, reduce latency in data processing, and enhance the overall efficiency of their data pipelines.
References
https://learn.microsoft.com/en-us/fabric/onelake/onelake-medallion-lakehouse-architecture
https://learn.microsoft.com/en-us/azure/databricks/lakehouse/medallion
https://blog.fabric.microsoft.com/en-us/blog/eventhouse-onelake-availability-is-now-generally-available?ft=All
https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-and-delta-tables
Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please send an email to Database Platform Engineering Team.
Partner Case Study Series | Siemens
Combining the real and the digital world
To understand the physical world, it can help to abstract it and view it through a digital lens. Siemens AG focuses on technological solutions that help its customers identify and solve the big challenges across multiple industries. From infrastructure, to transportation, to healthcare, Siemens empowers its customers to transform their markets, as well as the everyday lives of billions of people. In fact, Siemens’ Head of Product Management for Product Lifecycle Management software, Ales Alajbegovic, says, “We are providing industrial software for design and manufacturing. There is pretty much no company in the world that doesn’t use our software when it comes to these areas.”
And according to Siemens’ Global Alliance Leader for Microsoft, John Butler, those software use cases are ever expanding. “That’s everything from working with our customers to reduce drag on an automobile or an airplane to improving manufacturing efficiency or helping design the newest product. At the end of the day, what we’re trying to do is figure out how to expedite that manufacturing process and that development process to get products to market faster for our customers.”
Full visibility, from start to finish
There’s increasing pressure on businesses to review every phase of the product lifecycle for cost savings, schedule reductions, and other risk factors. Too often, problems come up on the manufacturing floor that are never addressed, causing a cascade effect on productivity across the line. To address these industry issues, you need a remarkable solution from an organization with a tenure to match.
Continue reading here
Automatically fill in customer data
Hello, good day. In my business I have several customers and several suppliers. I keep a general record of each customer’s supplies, with the date, volume, product supplied, which supplier provided the service, and other data.
I would like to have another file per customer, so that every time I fill in the general sheet, certain data is passed automatically to that customer’s tab. Is there a way to achieve this?
Thank you, and I look forward to your comments.
Line style and color for pivot chart with multiple legend items.
I have a pivot chart with 2 entries in Legend (Series). I would like to have the first entry determine the line style (solid, dashed, dotted, etc ) and the second one determine the line color.
So if for instance the first entry has 4 values and the second one has 5, I would expect 4 different styles and 5 different colors for the 20 lines. Instead I get 20 different colors, which is not a very useful way of representing multi-dimensional data. Is there a way to change this?
Cash in Microsoft Incentives !
Hey amazing ISV community !
We’ve got something awesome coming up, and I wanted to get you in the loop! On September 30th, we’re hosting a webinar all about Microsoft incentives for FY 2024-2025, and trust me, this is a must-attend for any ISV out there.
Why this is going to be 🔥:
For ISVs who aren’t transactable yet: This webinar is exactly what you need. We’re talking actionable steps to unlock revenue by getting onto the Marketplace. The final push you need to go from “thinking about it” to seeing real $$.
For ISVs who are already transactable: It’s all about doubling down. We’ll show you how to take full advantage of Microsoft’s incentives and commit more to the Marketplace for even bigger returns.
Webinar Details:
Date: September 30th
Time:
Morning Session: 10:00 AM – 10:30 AM (CEST) [Link to register]
Afternoon Session: 6:00 PM – 6:30 PM (CEST) [Link to register]
Topic: Strategies for Leveraging Microsoft Incentives in FY 2024-2025
This webinar is tailored to help you understand the various Microsoft Marketplace incentive programs available and how to strategically apply them to drive growth.
Register Here:
[Morning Session] – EMEA time zone
[Afternoon Session] – AMERICAS time zone
Looking forward to seeing you there!
Excel not password protecting VBA page properly…
Whenever I try to hide sheets and then password protect them in the VBA page it never works. Whenever I go back into the VBA page it just gives me immediate and full access without prompting me for a password. Any ideas? I am using Excel 365.
Thanks
Favicon not updating on Bing – thingstoconsidertoday.com
Hello,
I manage a website thingstoconsidertoday.com. I am trying to update the favicon to our logo, but it does not display on Bing. It displays on Google just fine.
Any thoughts on how to push this favicon so that Bing updates?
Thanks,
Ryan
Need Help with Windows Server 2022 License Activation After VM Crash
Hello Community,
I’m facing an issue with my Windows Server 2022 license and would appreciate some guidance.
Details:
I purchased a Windows Server 2022 license and activated it on two VMs.
Recently, both VMs crashed, and I’m now trying to activate the license on two new VMs.
However, I am encountering an error stating that the activation limit has been exceeded.
I understand that each license has an activation limit, but in cases where servers or VMs crash, what are my options for reusing the license on new VMs? How can I resolve the activation error and ensure my license is properly applied to the new servers?
Any advice on the correct steps to take, or if there’s a way to reset the activation count or transfer the license, would be greatly appreciated.
Thank you in advance for your help!
Help with migration concepts
Good morning to everyone.
I have a couple of questions that I hope you can help me resolve. These questions are related to Exchange Server 2013 (I know that this product is out of support and that is why we are migrating it to Exchange Server 2019).
This is my scenario:
I have a main site with 2 Exchange Server 2013 CU10 servers, I have a site with 1 Exchange Server 2013 CU23 server, I have another site with 1 Exchange Server 2013 CU23 server. The main idea is to migrate all mail servers to Exchange Server 2019.
These are the questions:
1. Is it possible to install a new server with Exchange Server 2019 on the main site without upgrading the remaining servers with Exchange Server 2013 to CU23?
2. What will happen to the mail flow after installing the new server with Exchange Server 2019? Will the main server with Exchange Server 2013 continue to manage the internal and external mail flow? Or will the new server with Exchange Server 2019 manage the mail flow?
3. In case the new server with Exchange Server 2019 is the one that manages the mail flow, is there a possibility that the server with Exchange Server 2013 will manage the mail flow until the migration is finished?
Thank you for your time and collaboration
IIS Logs have Incorrect Date Modified
I have a server that is creating daily IIS logs (stored local) with a timestamp in the Date Modified that have the incorrect date for the “current” log.
Example: Today is 9-10-2024. The current log is named correctly (u_ex240910.log) and has correct information inside, but the Date Modified timestamp is 9-9-2024 7:00 PM. There is also a log file (u_ex240909.log) which has correct information in it as well. I have dozens of IIS servers, and this is not an issue on the rest of them. The Logging feature in IIS Manager is set up identically on this issue server and the working servers, so I am stumped.
Screenshot of “problem” server.
Screenshot of “working” server:
Screenshot of Logging setup in IIS Manager(which is identical on both trouble and working servers):
Using a Calculated End Date in the Modern SharePoint Calendar View Drop-Down
Hello All: I recently created an end date as a calculated field in a Microsoft SharePoint list. The calculation’s data type returned is the date and time format. I want to use this new end date as the actual end date in my SharePoint calendar view. Unfortunately, the end date does not appear in the calendar view drop-down because it is not considered a “Date and Time” type. How do I convert my newly calculated end date to the type “Date and Time” so that it will appear in the calendar view end date drop-down menu? This calculation works in classic SharePoint, but renders no value in modern SharePoint.

I did see a similar request from Waqas on February 6, 2024. The response from Sophia Papadopoulos does not work. Also, the Microsoft moderator offered a response that was not helpful. See below:

“We went through your post carefully and do understand your great idea of importing the calculated end date as an actual date into a calendar to arrange tasks efficiently. But we are really sorry to convey that it seems like we also failed to achieve it from our tests. Given this situation, I sincerely recommend you use Feedback Community to suggest this feature limitation and add your valuable idea in the SharePoint Feedback Community (microsoft.com) which is the best place to share ideas directly with the product building team and improve the Microsoft Products.”

Does anyone have a workaround for this issue? Please let me know. It is amazing how the end date calculation is able to be picked up by the calendar view in the classic version and not the modern version. I look forward to your help. Thank you. BW
Task Start and Finish dates not in synch with Assignments view
When I allocate hours per task and resource in the Assignments view, the Start and Finish date of the task are set according to hour allocation – but only in the Assignments view!
When I go back to the Grid view and look at the start and finish dates for the tasks, they do not correspond to the start and finish dates which I see in the Assignments view.
Is there a way to fix this? I think the Assignments view is a great feature and I would love to use it for my project planning, but if the start and finish dates are not synchronized to the actual work allocation, this is a major drawback.
Improve end user resilience against QR code phishing
QR codes are gaining popularity as an easy way to access information for services and products. While QR codes are often used as convenient shortcuts, they can also be used by cybercriminals to trick users into scanning malicious QR codes and exposing themselves to risk. Understanding the dangers of QR codes, such as being redirected to fake websites or downloading malware, is crucial. Education enables users to check whether QR codes are genuine, examine destination URLs, and use reliable apps for scanning. In the ongoing fight against phishing, informed end users become an important line of defense, preventing possible threats and strengthening their organization’s resilience.
Recently, we have observed a new trend in phishing campaigns that leverage QR codes embedded in emails to evade detection and trick users into visiting malicious links. To help our customers defend against this emerging threat, Microsoft Defender for Office 365 has introduced several enhancements to its prevention capabilities that can detect and block QR code-based attacks. Check out this blog to learn more about QR codes and how Defender for Office 365 is protecting end users against such attacks: Protect your organizations against QR code phishing with Defender for Office 365
We also introduced several enhancements to its investigation, hunting and response capabilities to help security teams to hunt and respond to such threats. Read more about these enhancements here: Hunting and responding to QR code-based phishing attacks with Defender for Office 365
In addition to prevention, detection, and investigation capabilities, we are excited to share that Microsoft Defender for Office 365 has also made several updates to its simulation and training features.
As part of the simulation enhancements, you will now be able to perform the following tasks:
Running a simulation with QR codes and tracking user response
Utilizing out of the box Global payloads and creating a custom payload with QR codes
Utilizing training content through video modules and how to guides
Running a simulation
There is no change in running a simulation. The current flow which involves selection of users, selection of payload, scheduling training, and notifications is also applicable for QR code-based simulations. Within simulations, you can select payloads with QR codes and use them for simulation.
Currently, configuring payloads with QR codes and using those payloads in a simulation is supported on the Email platform and for the attack techniques below. Support for the Teams platform and for the Link in Attachment and attachment malware techniques will follow later.
Credential harvest
Link to malware
Drive by URL
OAuth consent grant
Given that QR codes are another vector for the phishing URL, the user events around read/delete/compromises/clicks remain the same—if a user is navigating to the URL after scanning the QR code, then it is tracked as a click event. The existing mechanisms for tracking compromise, deletes, and report events remain the same.
Global and Tenant Payloads
Global payloads
Our payload library now includes 75 payloads in five languages, addressing various real-world scenarios involving QR code attacks. These payloads can be found in the Content Library- Global Payloads, each beginning with QR code payloads (for example, QR code payloads: Prize Winner Notification). You can locate these by typing “QR” in the search bar.
Before implementing these payloads in your simulations, we advise examining their different fields and contents thoroughly.
Tenant payloads
You can create a custom payload by duplicating the existing global payloads or creating a payload from scratch. Within the payload editing experience, you can insert QR codes using Dynamic Tags (Insert QR code) or formatting controls (QR code icon). You have the options to select the size and position of the QR code.
The QR code that is generated will map to the phishing URL that is selected by you while configuring the payload in the payload wizard. When this payload is used in simulation, the service will replace the QR code with a dynamically generated QR code, to track click and compromise metrics. The size, position, and shape of the QR code would match the configuration of the QR set by you in the payload.
Training content
We have provided two mechanisms for learning about QR based attacks: How-to guides, and new training modules from our content partner.
How-to guides
How-to guides are designed to provide lightweight guidance to end users on how to report a phishing message directly through email. By delivering these guides directly to the end user’s inbox, we can ensure that the end user has the information they need to confidently report any suspicious emails.
You can filter for the How-to Guide through either:
Filtering by Technique = How-to Guide
Search by name = “Teaching Guide: How to recognize and report QR phishing messages”
Out-of-the-box trainings
Within the trainings list (Content Library- Training Modules), we have added a new training called Malicious Digital QR Codes, which is a short learning to educate on what to do when a user receives a QR code in the email. You can assign the training as part of a simulation or use training campaigns to assign the training to your users.
More information
More details around trainings are covered in this blog: Train your users to be more resilient against QR code phishing.
Review the documentation to learn more about the feature.
Note: As part of these changes, we will also be deprecating the alternative service, along with the GitHub repo.
Get started with attack simulation today.
Learn more about our latest features in Attack Simulation Training.
If you have other questions or feedback about Microsoft Defender for Office 365, engage with the community and Microsoft experts in the Defender for Office 365 forum.
Inspektor Gadget is available in AzureLinux 3
Inspektor Gadget is a set of tools and a framework enabling observability of Kubernetes clusters and Linux hosts using eBPF.
You can use the framework to create your own tools, i.e., gadgets, which are packaged as OCI images, enabling you to easily share them with other users.
Inspektor Gadget handles the enrichment of low-level data, like disk I/O to higher level ones, like container names.
Azure Linux is an open source Linux distribution developed by Microsoft.
It is the predominant Linux distribution for first-party Microsoft services and is also available for customers via, among others, Azure Kubernetes Service (AKS).
Recently, the Azure Linux team officially released its version 3.
Starting with this version, Inspektor Gadget is available in the official repository and can be installed by simply calling `dnf`.
This is a big improvement, as previously users had to download the RPM package available in our release pages themselves before proceeding with the installation.
Let’s now deploy an Azure Linux 3 VM to install and use Inspektor Gadget, specifically the `trace exec` gadget to monitor the corresponding syscalls:
# Let's set some variables we will use to deploy the Azure Linux VM.
you@home$ resource_group='azure-linux-3'
you@home$ vm='azure-linux-3-vm'
you@home$ admin='testadmin'
you@home$ image='MicrosoftCBLMariner:azure-linux-3:azure-linux-3:latest'
# Let's now create the resource group and the VM inside it.
you@home$ az group create --name $resource_group --location westeurope
…
you@home$ az vm create --resource-group $resource_group --name $vm --image $image --admin-username ${admin} --generate-ssh-keys --security-type Standard
…
you@home$ ip=$(az vm show --resource-group $resource_group --name $vm -d --query '[privateIps]' --output tsv)
# We can now connect to the VM through ssh.
you@home$ ssh $admin@$ip
testadmin@azure-linux-3-vm [ ~ ]$ cat /etc/os-release
NAME="Microsoft Azure Linux"
VERSION="3.0.20240727"
ID=azurelinux
VERSION_ID="3.0"
PRETTY_NAME="Microsoft Azure Linux 3.0"
ANSI_COLOR="1;34"
HOME_URL="https://aka.ms/azurelinux"
BUG_REPORT_URL="https://aka.ms/azurelinux"
SUPPORT_URL="https://aka.ms/azurelinux"
# Let’s install ig!
testadmin@azure-linux-3-vm [ ~ ]$ sudo dnf install -y ig
Last metadata expiration check: 0:03:01 ago on Thu Aug 22 08:31:41 2024.
Dependencies resolved.
=========================================================================================================================================
Package Architecture Version Repository Size
=========================================================================================================================================
Installing:
ig x86_64 0.30.0-1.azl3 azurelinux-official-base 18 M
Transaction Summary
=========================================================================================================================================
Install 1 Package
Total download size: 18 M
Installed size: 69 M
Downloading Packages:
ig-0.30.0-1.azl3.x86_64.rpm 3.2 MB/s | 18 MB 00:05
-----------------------------------------------------------------------------------------------------------------------------------------
Total 3.2 MB/s | 18 MB 00:05
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : ig-0.30.0-1.azl3.x86_64 1/1
Installed:
ig-0.30.0-1.azl3.x86_64
Complete!
testadmin@azure-linux-3-vm [ ~ ]$ ig version
v0.30.0
# Let’s run a simple loop spawning some processes.
testadmin@azure-linux-3-vm [ ~ ]$ while true; do date > /dev/null; sleep 1; done &
[1] 2035
# Let’s trace the exec syscall with the corresponding ig tool.
testadmin@azure-linux-3-vm [ ~ ]$ sudo ig trace exec --host
RUNTIME.CONTAINERNAME PID PPID COMM PCOMM RET ARGS
2127 2035 date bash 0 /usr/bin/date
2128 2035 sleep bash 0 /usr/bin/sleep 1
2129 2035 date bash 0 /usr/bin/date
2130 2035 sleep bash 0 /usr/bin/sleep 1
^C
testadmin@azure-linux-3-vm [ ~ ]$ kill 2035
As you can see, ig was able to report the exec() syscalls done to run date and sleep!
This way, you can use the tool to diagnose and troubleshoot Azure Linux host processes as well as processes running in containers!
This work would not have been possible without the help from the AzureLinux team, particularly Christopher Co and Muhammad Falak R. Wani.
We thank them for making it possible!
Introducing the marketplace value calculator
We’re delighted to announce that the marketplace value calculator, on the Marketplace Rewards Toolbox, is now available to use in 13 languages worldwide!
What is the marketplace value calculator?
The marketplace value calculator is a simple and quick way to see exactly how many direct value benefits, cloud credits, and incentives your business can unlock with the Microsoft commercial marketplace. After plugging in just a few projections, you’ll be able to see (and share with your colleagues) how these benefits, cash, and free products outweigh the small marketplace fees.
This will help you, and others at your company, see the costs and benefits associated with building, launching, and selling with Microsoft. This calculator is available to anyone to see if working with Microsoft makes sense.
How it works:
The calculator, found on the Marketplace Rewards Toolbox here, is simple by design.
Start by choosing which benefit your company uses (ISV Success packages) and enter a few pertinent numbers. Clicking calculate will show you a summary snapshot of the total cash, product, and product credit value your business will get, as well as the costs associated with working with Microsoft.
Clicking “Read full report” allows you to see a more detailed report that shows year-by-year calculations, including other qualitative benefits with links to learn more about how to qualify and apply.
You can easily share this report with other decision makers in your business by selecting “Share” and copying the URL. If you want to change your projections to see how the benefits change, you can recalculate at any time.
A GIF of how easy it is to use the marketplace value calculator. To try it out for yourself go to Marketplace Rewards (microsoft.com) and choose “Marketplace value calculator”
What you’ll get when you use the calculator:
We believe the data-driven business case of working with Microsoft should be very clear and—with this calculator—we aim to provide clarity with a small time investment. With simple-to-understand costs and benefits, the business case will be clearer, and you’ll have more time to work on your business, not your decision making.
We look forward to you using it.
Go use the calculator here: Marketplace Rewards (microsoft.com)
Enable strong name-based mapping in government scenarios
If you work in smartcard federated authentication environments, here’s a much-anticipated security feature for you. Starting with the September 10, 2024 Windows security update, you can use strong name-based mapping on Windows Server 2019 and newer. This feature helps you with the hardening changes for certificate-based authentication on Windows domain controllers.
What are weak and strong mappings in Active Directory?
All certificate names must be correctly mapped onto the intended user account in Active Directory (AD). If there’s a likelihood that they aren’t, we call these mappings weak. Weak mappings give rise to security vulnerabilities and demand hardening measures such as Certificate-based authentication changes on Windows domain controllers.
Following up on our May 2022 round of updates to address these vulnerabilities, we’re introducing a new feature called strong name-based mapping. You can now distinguish between “strong” and “weak” mappings within existing Alternative Security Identities (AltSecIDs) based on likelihood. With the new feature, you can allow some weak name-based mappings to be treated as strong name-based mappings. You just need to properly configure both the public key infrastructure (PKI) and the AD deployment.
Key features and benefits of strong name-based mapping
Strong name-based mapping has two main benefits:
Compliance with strong certificate mapping enforcement. Strong name-based mapping allows certain weak certificate mappings, such as Issuer/Subject AltSecID and User Principal Names (UPN) mappings, to be treated as strong mappings. This type of strong mapping is compatible with the enforcement mode of certificate-based authentication changes on Windows domain controllers.
Compatibility with government PKI deployments. Strong name-based mappings work by asking PKI deployments to attest certain security guarantees of certificates via object identifiers (OIDs) stamped on the certificate. It’s a common practice among government PKI and AD deployments.
Security requirements for PKI deployments for strong name-based mapping
Warning
Unless you have a strong need for this type of deployment AND have a deep knowledge of how PKI deployments and AD authentication interact, we DO NOT recommend deploying strong name-based mapping. We instead recommend that you follow the guidance in KB5014754: Certificate-based authentication changes on Windows domain controllers.
Fundamentally, strong name-based mapping deployment is your promise to Microsoft that your PKI is not susceptible to the attacks addressed by May 2022 and later updates. Namely, you take responsibility for the vulnerabilities that can arise from any unintentional mapping of the names in a certificate to multiple AD accounts.
To prevent unintentional and unsafe mappings, we recommend that you take steps to strengthen your PKI and AD deployments. Some of these steps include:
Names used in either the Subject Name and/or the Subject Alternative Name of certificates MUST NOT contain names that are queried and/or built from AD.
Names used in either the Subject Name and/or the Subject Alternative Name of certificates MUST be both immutable and globally unique to the entire PKI deployment.
AD and PKI administrators must ensure that certificate issuance for logons is not automatic. Instead, ensure that strong manual checks are in place to prevent a certificate with an incorrect or clashing name from being issued.
Failing to secure your PKI and AD deployments can degrade the security of your environment.
If your PKI meets or exceeds these security requirements, you MUST add an OID in the Issuance Policy of the certificate to denote this compliance. This OID (or multiple OIDs) will be used further below in the strong name-based mapping configuration.
Setup instructions
To enable strong name-based mapping on Windows Server 2019 and later, you need to take the following steps:
Enable the Group Policy (GPO) Setting on the Domain Controllers:
Computer Configuration > Administrative Template > System > KDC > “Allow name-based strong mappings for certificates”.
Configure the GPO with the necessary tuples (more details below).
This configuration relies on adding tuples to the GPO when strong name-based mapping is enabled. These tuples tell the Domain Controller which certificates meet the above security requirements by specifying both the Issuer certificate authority (CA) thumbprint and the OID(s) that denote that the PKI deployment is secured against the May 2022 vulnerabilities. Furthermore, the tuples also configure which “weak” name-based mappings can be upgraded to “strong” name-based mappings.
The tuple is in the following format:
<Issuer CA Certificate Thumbprint>;<OID(s)>;<IssuerSubject/UpnSuffix=()>
Issuer CA Certificate Thumbprint: This is the certificate thumbprint of the Issuing CA. There can only be one Issuer CA Thumbprint in this field. If multiple Issuer CA Thumbprints are placed, it can prevent proper processing of the GPO policy.
OID(s): This is a comma-separated list of OIDs that the PKI deployment has stamped on the certificate to attest that the security requirements against name collisions have been met. There can be multiple OIDs denoted in this field.
IssuerSubject/UpnSuffix: This is a comma-separated list to denote what type of weak mapping should be treated as strong:
IssuerSubject: This string behaves as a tag to denote that the Issuer/SubjectName AltSecID can be upgraded from “weak” to “strong.” There can only be one IssuerSubject tag in this field.
UPNSuffix: This string denotes that certificate mappings can be upgraded from “weak” to “strong” wherever the UPN suffix of the SubjectName (that is, everything that comes after the @ symbol) matches the suffix in the tuple exactly. There can be multiple UPN suffixes in this field.
The logic of the tuple is the following: for certificates issued by X that carry any of the OIDs Y, upgrade the listed weak mappings to “strong.” This logic is summarized in the diagram.
Two important configuration details are required for UPN Suffix mapping to work:
Certificates must have the UPN of the user in the SAN.
Mapping via UPNs has not been disabled via UseSubjectAltName.
How to use and understand policy tuples: a walkthrough
Policy tuple example 1
Use this policy tuple to allow a strong mapping via Issuer/SubjectName AltSecID.
fe40a3146d935dc248504d2dcd960d15c4542e6e; 2.16.840.1.101.3.2.1.3.45;IssuerSubject
For certificates whose Issuer Certificate Thumbprint is fe40a3146d935dc248504d2dcd960d15c4542e6e, and
The certificate has the OID 2.16.840.1.101.3.2.1.3.45,
Allow a strong mapping if the certificate is mapped via Issuer/SubjectName AltSecID.
This tuple would allow a certificate logon which passes checks (1) and (2) issued to the user Bob, if the AD object for Bob has the Issuer/SubjectName AltSecID correctly configured for the certificate.
Policy tuple example 2
Use this policy tuple to allow a strong mapping via a specified UPNSuffix.
fe40a3146d935dc248504d2dcd960d15c4542e6e; 2.16.840.1.101.3.2.1.3.45;UPNSuffix=corp.contoso.com
For certificates whose Issuer Certificate Thumbprint is fe40a3146d935dc248504d2dcd960d15c4542e6e, and
The certificate has the OID 2.16.840.1.101.3.2.1.3.45,
Allow a strong mapping if the certificate is mapped via UPNSuffix, which should be “corp.contoso.com.”
This tuple would allow a certificate logon which passes checks (1) and (2) issued to the user Bob, if the UPN in Bob’s certificate maps to his AD account and its suffix is corp.contoso.com.
Policy tuple example 3
Use this policy tuple to allow a strong mapping via any of the approved specifications.
fe40a3146d935dc248504d2dcd960d15c4542e6e; 2.16.840.1.101.3.2.1.3.45, 2.16.840.1.101.3.2.1.3.44;UPNSuffix=corp.contoso.com,UPNSuffix=my.corp.contoso.com,IssuerSubject
For certificates whose Issuer Certificate Thumbprint is fe40a3146d935dc248504d2dcd960d15c4542e6e, and
The certificate has ANY of the following OIDs:
2.16.840.1.101.3.2.1.3.45
2.16.840.1.101.3.2.1.3.44
Allow a strong name-based mapping if the certificate is mapped via any of the following:
The user account in AD has a valid Issuer/SubjectName AltSecID mapping
UPNSuffix, where the suffix is “corp.contoso.com”
UPNSuffix, where the suffix is “my.corp.contoso.com”
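Continuing the illustrative sketch above, this is how the example 3 tuple would evaluate for a hypothetical certificate issued by the listed CA, carrying the second OID, and mapped via the UPN bob@my.corp.contoso.com in its SAN:
policy = parse_policy_tuple(
    "fe40a3146d935dc248504d2dcd960d15c4542e6e;"
    " 2.16.840.1.101.3.2.1.3.45, 2.16.840.1.101.3.2.1.3.44;"
    "UPNSuffix=corp.contoso.com,UPNSuffix=my.corp.contoso.com,IssuerSubject"
)
print(allows_strong_mapping(
    policy,
    issuer_thumbprint="fe40a3146d935dc248504d2dcd960d15c4542e6e",
    cert_oids=["2.16.840.1.101.3.2.1.3.44"],
    mapping_type="UPNSuffix",
    upn="bob@my.corp.contoso.com",
))  # prints True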
Event Log changes
Two Event Log updates help you, as an AD administrator, better troubleshoot strong name-based mapping scenarios. These are available with the September 10, 2024 and later updates.
Updates to current event logs
The current event logs now include policy OIDs found on the certificate used for authentication. This modifies the Key Distribution Center (KDC) events introduced by the May 10, 2022 and later updates.
New event logs
Additionally, a new event is available to log when the strong name-based mapping GPO encounters an issue processing the policy tuples. Track these events through Event ID 311.
Event Log: Microsoft-Windows-Kerberos-Key-Distribution-Center/Operational
Event Type: Error
Event Source: Kerberos-Key-Distribution-Center
Event ID: 311
Event Text: The Key Distribution Center (KDC) encountered invalid certificate strong name match policy.
Faulting line: <line number>
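To spot-check a domain controller for these events, one option is a small Python wrapper around the built-in wevtutil tool, sketched below (the log name and Event ID come from the table above; run it elevated on the DC):
import subprocess

# Pull the 10 most recent Event ID 311 entries from the KDC operational log.
KDC_LOG = "Microsoft-Windows-Kerberos-Key-Distribution-Center/Operational"
result = subprocess.run(
    [
        "wevtutil", "qe", KDC_LOG,
        "/q:*[System[(EventID=311)]]",  # XPath filter on the Event ID from the table above
        "/f:text",                      # human-readable output
        "/c:10",                        # limit to 10 events
        "/rd:true",                     # newest first
    ],
    capture_output=True, text=True, check=False,
)
print(result.stdout or result.stderr)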
Ready to improve Windows Server security?
We’re excited to bring this feature to your government scenarios. Consider strong name-based mappings on Active Directory and PKI deployments in Windows Server 2019 or later if you meet the security requirements and recommendations. If you have any questions or need assistance, our support team is here to help.
Continue the conversation. Find best practices. Bookmark the Public Sector Tech Community, then follow us on the Public Sector Blog for updates.
Microsoft Tech Community – Latest Blogs –Read More
SIEM Migration Update: Now Migrate with contextual depth in translations with Microsoft Sentinel!
What’s new in SIEM Migration?
The process of moving from Splunk to Microsoft Sentinel via the SIEM Migration experience has been enhanced with three key additions that help customers get more context-aware translations of their detections from Splunk to Sentinel. These features let customers provide more contextual details about their Splunk environment and usage to the Microsoft Sentinel SIEM Migration translation engine so it can account for them when converting detections from SPL to KQL. These are:
Schema Mapping
Support for Splunk Macros in translation
Support for Splunk Lookups in translation
Let’s talk about how these can make life easier when migrating to Microsoft Sentinel via the SIEM Migration experience:
Schema Mappings
How does it help?
Most traditional translation tools factor in only grammar when translating from one query language to another. They address the “how” of the queries: How are these queries structured? How are operational and computational logics defined?
The “what” is often lost in translation: What data sources are being queried? What do these data sources really map to in the target SIEM?
The “what” is environmental customer context that needs to be accounted for in translation to ensure that grammar translations are applied to the right sources.
The Approach
Schema mapping in the SIEM Migration experience lets you precisely define how Splunk sources (indexes, data models, etc.) map to Microsoft Sentinel tables within the new “Schema mapping” section of the UI experience. This provides the flexibility and customization to ensure that your data is aligned with your migration needs. On uploading the Splunk export, the system extracts all the sources from the SPL queries. Known sources such as Splunk CIM schemas and data models are auto-mapped to ASIM schemas as applicable. Custom sources queried in the detections are listed without being mapped and require manual mapping to existing Microsoft Sentinel/Azure Log Analytics tables. All mappings can then be reviewed or modified, and new sources can be added. Schema mapping is hierarchical: each Splunk source maps 1-1 to a Sentinel table, and the fields within that source map to fields in that table.
The best part? Manual changes to the schema mapping are saved per workspace, so you do not have to repeat them.
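Conceptually, each mapping ties one Splunk source to one Sentinel table, plus a field-to-field mapping inside it. The Python sketch below only illustrates that hierarchy (the source, table, and field names are hypothetical); it is not a file format used by the experience:
# Hypothetical example of the two-level mapping hierarchy:
# Splunk source -> Sentinel table, and Splunk fields -> Sentinel fields.
schema_mapping = {
    "splunk_source": "index=winsecurity",
    "sentinel_table": "SecurityEvent",
    "field_mappings": {
        "user": "Account",
        "src_ip": "IpAddress",
        "signature_id": "EventID",
    },
}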
Step-by-Step usage guidance
To leverage Schema Mappings,
Navigate to the SIEM Migration experience from the Microsoft Sentinel Content Hub.
Review prerequisites and click “Next: Upload File”.
Export the inventory of Splunk detections by following the instructions on the screen and once exported, upload to Sentinel. Click “Next: Schema Mapping (Preview)”
Review the Splunk data sources identified from the export process. To review the field mappings within a data source, select the Splunk source which will open a side panel on the right that has the field mappings.
Review, Modify, Add schema mappings
Data Source Mappings: To edit the Sentinel table that the Splunk source is mapped to, select the Sentinel table from the Sentinel Table dropdown.
Field Mappings: To edit field mappings, look for the Splunk field on the left that you wish to change the mapping for and then for this Splunk field, select the corresponding Sentinel field from the dropdown.
Add new Schema Mappings: If you do not find a Splunk source in the list of data sources, click “+ Add source”. In the right-side panel, add the name of your Splunk data source and select a Sentinel table from the dropdown menu. Click “+ Add mapping” to continue adding field mappings by entering the Splunk field name manually on the left and selecting the corresponding Sentinel field name on the right.
Once the changes have been completed, click on “Save Changes”. Note that the Mapping state now changes to “Manually Mapped”.
Once the Schema Mappings are complete, the changes made are taken into account when the SPL saved searches are translated to KQL queries.
Translation support for Splunk Lookups
Splunk Lookups, like Sentinel Watchlists, are lists of field-value combinations that can be queried or correlated against ingested data. The SIEM Migration experience translates the use of Splunk lookups in SPL detection queries into the use of Sentinel Watchlists in the generated KQL queries.
Note: Sentinel Watchlists must be created as a prerequisite so they can be mapped to Splunk Lookups when you start migrating.
The Approach
Splunk lookups are defined and stored outside the SPL query itself; the query only references them via the “lookup”, “inputlookup” and/or “outputlookup” keywords. Translation support is only available for the “lookup” and “inputlookup” keywords, where lookup data is queried or correlated against. The “outputlookup” operation, where data is written to a lookup, is not supported in translation but can be achieved by defining an Automation Rule in Microsoft Sentinel.
For translating the invocation of lookups, SIEM Migration’s translation engine uses the “_GetWatchlist()” KQL function to allow mapping to the correct Sentinel watchlist, supplemented in operation by other KQL functions to translate the complete logic.
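As a deliberately simplified illustration of the idea (the real translation engine rewrites the full query logic, not just this token), the Python sketch below swaps an SPL inputlookup reference for a KQL _GetWatchlist() call using a hypothetical lookup-to-watchlist mapping:
import re

lookup_to_watchlist = {"bad_ips.csv": "BadIPs"}  # hypothetical mapping configured in the experience

def translate_inputlookup(spl_fragment):
    # Replace "| inputlookup <name>" with "_GetWatchlist('<watchlist>')".
    def repl(match):
        lookup_name = match.group(1)
        watchlist = lookup_to_watchlist.get(lookup_name, lookup_name)
        return f"_GetWatchlist('{watchlist}')"
    return re.sub(r"\|\s*inputlookup\s+(\S+)", repl, spl_fragment)

print(translate_inputlookup("| inputlookup bad_ips.csv"))  # prints _GetWatchlist('BadIPs')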
Step-by-Step usage guidance
To ensure the correct Splunk Lookup to Sentinel Watchlist mapping, it’s important for the SIEM Migration experience to have this mapping context. The experience now lets customers map their Splunk lookups (automatically identified from the uploaded Splunk queries) to Sentinel Watchlists (created outside the experience as a prerequisite).
Follow the guidance here to create Sentinel Watchlists.
Once the Watchlists are created, follow the guidance below to map these Sentinel Watchlists to Splunk lookups:
Navigate to the SIEM Migration experience from the Microsoft Sentinel Content Hub.
Review prerequisites and click “Next: Upload File”.
Export the inventory of Splunk detections by following the instructions on the screen and once exported, upload to Sentinel. Click “Next: Schema Mapping (Preview)”.
Click on the “Lookups” tab and start reviewing/mapping the lookups.
To add field mappings, click on the Splunk Lookup that needs to be mapped, and in the right-side panel that opens, select the corresponding Sentinel Watchlist.
Once the Sentinel Watchlist is selected, the field mappings can be completed by selecting the Watchlist field from the field dropdown corresponding to the Lookup field on the left.
On completing the review, click “Save Changes”. Note that the Mapping state now changes to “Manually Mapped”.
Once all Splunk lookups have been reviewed, click on “Next: Configure Rules” to start translations to KQL.
NOTE: When a Splunk lookup does not have a corresponding Sentinel Watchlist mapped, the translation engine keeps the Splunk lookup name and field names for the Sentinel Watchlist and its fields in the generated KQL.
Translation support for Splunk Macros
How does this help?
A core tenet of developers is automation and functionality reuse. Macros are integral for quick development, but every architect silently curses these “shortcuts” when having to migrate to a different tech stack.
When upgrading the SIEM migration experience, the team thought: what if someone told the architect, “Hey, we’ve got this covered”? All SPL detection queries are seamlessly expanded by replacing macro references inline with their respective macro definitions, then passed on to the translation engine, so the core detection logic is retained when the language translation happens.
The Approach
To enable this macro expansion, the experience needs more context and data, be it context for the data field mapping or the Splunk code associated with the macros. This enrichment is done via the initial file query and uploader, which now uses a richer query to pull the necessary information: the metadata of the detections plus all macro definitions. This extra information helps ensure all pieces of the puzzle are in the right place before translation.
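As a toy illustration of what inline expansion means (the real experience also handles parameterized macros; the macro name and definition below are made up), consider this Python sketch:
# Replace each `macro_name` reference in an SPL query with its definition
# before the query is handed to the translation engine.
macro_definitions = {
    "failed_logons": "sourcetype=wineventlog EventCode=4625",  # hypothetical macro
}

def expand_macros(spl_query, definitions):
    for name, body in definitions.items():
        spl_query = spl_query.replace(f"`{name}`", body)
    return spl_query

print(expand_macros("search `failed_logons` | stats count by user", macro_definitions))
# prints: search sourcetype=wineventlog EventCode=4625 | stats count by user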
Step-By-Step Usage Guidance
Do not worry, there are no extra steps here. 🙂
The experience remains the same: copy the query and run it on Splunk to obtain the import file necessary for migration. As mentioned earlier, the query has been enhanced to gather broader context with an updated format.
There are no extra touchpoints. The migration experience takes care of the rest and shows you the expanded source query, with macro references replaced inline with their respective definitions, in the “Configure Rules” tab.
Microsoft Tech Community – Latest Blogs –Read More