Month: September 2024
How Do I Convert WebP to PNG on Windows 11?
Hi,
I really need some help here, as I just upgraded my PC from Windows 10 to Windows 11. I have more than 10 .webp images downloaded from the web and am currently looking for a way to bulk convert WebP to PNG so I can edit and share them with others.
Does Windows 11 come with a WebP to PNG converter? If yes, could you kindly let me know? Ideally, the conversion would also preserve image quality.
Thank you
Is it possible to configure the file repository of a Teams group conversation?
Hello:
I need to be able to configure a specific file repository where people in a Teams group conversation can save files, only for that conversation. Here is the scenario:
I need to dynamically create different group conversations depending on the topic being discussed, and I will add different members to every conversation.
Each time, I will create a new folder in a Teams channel where only the conversation members and channel members will have full permissions.
Is it possible to configure the file repository for each conversation so that it points to the folder created and shared in the Teams channel? (see attached image)
If it is possible, I need to know which commands or functions allow that.
I would like to use them in a Power Apps application.
Thank you very much.
Optimizing Models: Fine-Tuning, RAG and Application Strategies
Before diving in, let's take a moment to review the key resources and foundational concepts that will guide us through this blog, so we're well-equipped to follow along.
Microsoft Azure: Microsoft's cloud computing platform and suite of cloud services. It provides a wide range of cloud-based services and solutions that enable organizations to build, deploy, and manage applications and services through Microsoft's global network of data centers.
AI Studio: a platform that helps you evaluate model responses and orchestrate prompt application components with prompt flow for better performance. The platform makes it easy to scale proofs of concept into full-fledged production, while continuous monitoring and refinement support long-term success.
Fine-tuning: the process of retraining pretrained models on specific datasets, typically to improve model performance on specific tasks or to introduce information that wasn't well represented when the base model was originally trained.
Retrieval Augmented Generation (RAG): a pattern that works with pretrained large language models (LLMs) and your own data to generate responses. In Azure Machine Learning, you can implement RAG in a prompt flow.
Our hands-on exercise will be developing an AI-based solution that helps the user extract financial information and insights from the investment and finance books and newspapers in our database.
The process is divided into three main parts:
Fine-tune a base model with financial data to help the model provide more specific responses grounded in data related to finance and investment.
Implement RAG so that the response is based not only on the data the model was fine-tuned with, but also on other data sources (the user's input in our case).
Integrate the deployed model into a web app so that it can be used through a user interface.
1- Setup:
Create a resource group which is defined as a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group.
You need to specify your subscription, a unique resource group name, and the region.
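If you prefer to script this step instead of using the portal, here is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-resource). The subscription ID, resource group name, and region below are placeholders we are assuming for illustration, not values from this walkthrough:
# Minimal sketch: create a resource group with the Azure SDK for Python.
# Subscription ID, group name, and region are placeholders/assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"
resource_client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

rg = resource_client.resource_groups.create_or_update(
    "rg-finance-rag",            # unique resource group name (placeholder)
    {"location": "eastus2"},     # your preferred region (placeholder)
)
print(rg.name, rg.location)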
Create an Azure OpenAI resource: Azure OpenAI Service provides REST API access to OpenAI’s powerful language models including GPT-4o, GPT-4 Turbo with Vision, GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, image understanding, semantic search, and natural language to code translation
Create a text embedding model: an embedding is an information-dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with the semantic similarity between the two inputs in the original format.
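As a quick sanity check once the embedding deployment exists, you can call it with the Azure OpenAI Python client. The sketch below is illustrative; the endpoint, key, API version, and deployment name are assumptions, not values from this post:
# Minimal sketch: request an embedding from an Azure OpenAI deployment.
# Endpoint, key, API version, and deployment name are placeholders/assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.embeddings.create(
    model="<your-embedding-deployment-name>",
    input="Diversification reduces unsystematic risk in a portfolio.",
)
vector = response.data[0].embedding
print(len(vector))  # dimensionality of the embedding vector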
Create an AI search resource: Azure AI Search (“Azure Cognitive Search” previously) provides secure information retrieval at scale over user-owned content in traditional and generative AI search applications. Information retrieval is foundational to any app that surfaces text and vectors. Common scenarios include data exploration, and increasingly feeding query results to prompts based on your proprietary grounding data for conversational search as we will do in our example.
Create a storage account: it contains all your Azure Storage data objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS.
Create a blob container: Blob Storage is Microsoft's object storage solution, optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data. The container will be used to store your data.
Navigate to your storage resource -> Click on the Storage browser tab on the left -> Click Blob containers -> Click on + Add container, then upload your data. Our data consisted of PDF files (books and newspapers) and CSV files from Kaggle, all related to finance and investment.
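If you'd rather upload programmatically than through the Storage browser, here is a minimal sketch with the azure-storage-blob package; the connection string, container name, and file name are placeholders we are assuming for illustration:
# Minimal sketch: upload a local file to a blob container.
# Connection string, container name, and file name are placeholders/assumptions.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<your-storage-connection-string>")
container = service.get_container_client("finance-docs")  # your container name

with open("intelligent_investor.pdf", "rb") as data:
    container.upload_blob(name="intelligent_investor.pdf", data=data, overwrite=True)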
Create a search index: the index holds your searchable content, available to the search engine for indexing, full-text search, vector search, hybrid search, and filtered queries. Check that the status of your AI Search service is "Running".
Import and vectorize data: integrated vectorization is an extension of the indexing and query pipelines in Azure AI Search. It adds data chunking (splitting the data into smaller, manageable pieces) during indexing and text-to-vector conversion during indexing.
Navigate to your AI Search service -> Click on the Indexes tab on the left -> Click on "Import and vectorize data" -> Select the text embedding model you deployed previously.
2- Fine Tune a Base Model
Preparing the dataset: before starting fine-tuning, we need to prepare our training and validation data, which could be collected manually or generated from text files or tabular data. However, it is important that the data conform to the following template:
{"messages": [{"role": "system", "content": "content goes here"}, {"role": "user", "content": "query goes here?"}, {"role": "assistant", "content": "response goes here."}]}
To satisfy this, we prepared our two data sets training_data.jsonl and val_data.jsonl for training and validation, respectively.
Below is an example of our training_data.jsonl:
{"messages": [{"role": "system", "content": "You are a helper in finance and an advisor in investment"}, {"role": "user", "content": "What makes a successful startup?"}, {"role": "assistant", "content": "Luck, hard work, and maybe skills… mostly luck, hard work, and consistency."}]}
Both data files are attached to this blog. They were collected manually through some examples.
Evaluate the data to ensure its quality, and check the number of tokens and their distribution.
import json
import tiktoken
import numpy as np
from collections import defaultdict

encoding = tiktoken.get_encoding("cl100k_base")

# Count the total number of tokens in a chat example.
def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
    num_tokens = 0
    for message in messages:
        if not isinstance(message, dict):
            print(f"Unexpected message format: {message}")
            continue
        num_tokens += tokens_per_message
        for key, value in message.items():
            if not isinstance(value, str):
                print(f"Unexpected value type for key '{key}': {value}")
                continue
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3
    return num_tokens

# Count only the tokens in the assistant responses.
def num_assistant_tokens_from_messages(messages):
    num_tokens = 0
    for message in messages:
        if not isinstance(message, dict):
            print(f"Unexpected message format: {message}")
            continue
        if message.get("role") == "assistant":
            content = message.get("content", "")
            if not isinstance(content, str):
                print(f"Unexpected content type: {content}")
                continue
            num_tokens += len(encoding.encode(content))
    return num_tokens

# Print summary statistics of the token counts.
def print_distribution(values, name):
    if values:
        print(f"\n#### Distribution of {name}:")
        print(f"min / max: {min(values)}, {max(values)}")
        print(f"mean / median: {np.mean(values)}, {np.median(values)}")
        print(f"p5 / p95: {np.quantile(values, 0.05)}, {np.quantile(values, 0.95)}")
    else:
        print(f"No values to display for {name}")

files = [
    r'train_data.jsonl',
    r'val_data.jsonl'
]

for file in files:
    print(f"Processing file: {file}")
    try:
        with open(file, 'r', encoding='utf-8') as f:
            total_tokens = []
            assistant_tokens = []
            for line in f:
                try:
                    ex = json.loads(line)
                    messages = ex.get("messages", [])
                    if not isinstance(messages, list):
                        raise ValueError("The 'messages' field should be a list.")
                    total_tokens.append(num_tokens_from_messages(messages))
                    assistant_tokens.append(num_assistant_tokens_from_messages(messages))
                except json.JSONDecodeError:
                    print(f"Error decoding JSON line: {line}")
                except ValueError as ve:
                    print(f"ValueError: {ve} - line: {line}")
                except Exception as e:
                    print(f"Unexpected error processing line: {e} - line: {line}")
            if total_tokens and assistant_tokens:
                print_distribution(total_tokens, "total tokens")
                print_distribution(assistant_tokens, "assistant tokens")
            else:
                print("No valid data to process.")
            print('*' * 50)
    except FileNotFoundError:
        print(f"File not found: {file}")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
Login to AI Studio
Navigate to the Fine-tuning tab
Check the available models for fine-tuning within your region.
Upload your training and validation data
Since we have our data locally, we uploaded it directly. If you instead want to store your data in the cloud and reference it by URL rather than using the "Uploading files" option, you can use the SDK and follow this code:
# Assumes azure_oai_endpoint, azure_oai_key, and version hold your Azure OpenAI
# endpoint, key, and API version (defined earlier in your environment).
from openai import AzureOpenAI

# Initialize AzureOpenAI client
client = AzureOpenAI(
    azure_endpoint=azure_oai_endpoint,
    api_key=azure_oai_key,
    api_version=version  # Ensure this API version is correct
)

training_file_name = r'path'
validation_file_name = r'path'

try:
    # Upload the training dataset file
    with open(training_file_name, "rb") as file:
        training_response = client.files.create(
            file=file, purpose="fine-tune"
        )
    training_file_id = training_response.id
    print("Training file ID:", training_file_id)
except Exception as e:
    print(f"Error uploading training file: {e}")

try:
    # Upload the validation dataset file
    with open(validation_file_name, "rb") as file:
        validation_response = client.files.create(
            file=file, purpose="fine-tune"
        )
    validation_file_id = validation_response.id
    print("Validation file ID:", validation_file_id)
except Exception as e:
    print(f"Error uploading validation file: {e}")
You can specify the hyperparameters such as batch size, or leave them with default values.
Review the settings before submitting
Check the status of the fine-tuning in your dashboard, changing from Queued to Running to Completed.
Once completed, your fine-tuned model is ready to be deployed. Click on ‘Deploy’
After successful deployment, you can go back to Azure OpenAI and find your fine-tuned model deployed alongside your previously deployed text embedding model.
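As a quick smoke test of the new deployment, you can call it directly with the Azure OpenAI client. The sketch below reuses the system and user messages from our training example; the endpoint, key, API version, and deployment name are placeholders we are assuming for illustration:
# Minimal sketch: query the fine-tuned deployment directly (no RAG yet).
# Endpoint, key, API version, and deployment name are placeholders/assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-finetuned-deployment-name>",
    messages=[
        {"role": "system", "content": "You are a helper in finance and an advisor in investment"},
        {"role": "user", "content": "What makes a successful startup?"},
    ],
)
print(response.choices[0].message.content)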
3- Integration into Web App
The concept here is to rely on the model's knowledge plus the user's documents. We have two options, and both provide high-precision responses:
Look for the answer in the documents, and if not found, return a response based on the internal knowledge of the model.
Combine the two responses from the retriever and the model, which is the option we opt for here.
For integration, there are two routes we can follow: use the Azure OpenAI user interface and deploy to an Azure static web app, or develop your own web app and use the Azure SDK to integrate the model.
1- Deploying into Azure static web app
Click on "Open in Playground" below your deployments list in Azure OpenAI
Click "Add your data"
Choose your Azure Blob Storage as the data source -> Choose the index name "myindex"
Customize the system message to “You are a financial advisor and an expert in investment. You have access to a wide variety of documents. Use your own knowledge to answer the question and verify it or supplement it using the relevant documents when possible.” This system message will enable the model not only to rely on documents but also rely on its internal knowledge.
Complete the setup and click on “Apply changes”
Deploy to a new web app and configure the web app name, subscription, resource group, location, and pricing plan.
2- Develop your own web App and use Azure SDK
Prepare your environment
# Assumes python-dotenv is installed and a .env file holds the variables below.
import os
from dotenv import load_dotenv

load_dotenv()
azure_oai_endpoint = os.getenv("AZURE_OAI_FINETUNE_ENDPOINT2")
azure_oai_key = os.getenv("AZURE_OAI_FINETUNE_KEY2")
azure_oai_deployment = os.getenv("AZURE_OAI_FINETUNE_DEPLOYMENT2")
azure_search_endpoint = os.getenv("AZURE_SEARCH_ENDPOINT")
azure_search_key = os.getenv("AZURE_SEARCH_KEY")
azure_search_index = os.getenv("AZURE_SEARCH_INDEX")
Initialize your AzureOpenAI client
client = AzureOpenAI(
    base_url=f"{azure_oai_endpoint}/openai/deployments/{azure_oai_deployment}/extensions",
    api_key=azure_oai_key,
    api_version="2023-09-01-preview"
)
Configure your data source for Azure AI Search. This will retrieve responses from our stored files.
extension_config = dict(
    dataSources=[
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": azure_search_endpoint,
                "key": azure_search_key,
                "indexName": azure_search_index,
            }
        }
    ]
)
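For completeness, here is a sketch of how such a configuration is typically passed to a chat completion call against the extensions endpoint configured above; the system message and user question are illustrative examples, not part of the original walkthrough:
# Sketch: call the fine-tuned deployment and ground the answer with the
# AI Search data source defined above. The messages are illustrative.
response = client.chat.completions.create(
    model=azure_oai_deployment,
    messages=[
        {"role": "system", "content": "You are a financial advisor and an expert in investment."},
        {"role": "user", "content": "How should a beginner think about diversification?"},
    ],
    extra_body=extension_config,  # attaches the Azure AI Search data source
)
print(response.choices[0].message.content)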
RAG is used to enhance a model’s capabilities by adding more grounded information, not to eliminate the model’s internal knowledge.
Some issues that you may face during development:
Issue 1: make sure to verify the OpenAI library version. You can pin the version to openai==0.28 or upgrade it and follow the migration steps.
Issue 2: you may run out of quota and be asked to wait 24 hours before the next try. Make sure you always have enough quota in your subscription.
Next, you can look at how to do real-time injection so that you can personalize the responses further. Try to work out how to connect your web app, the user's input and output, the search index, and the LLM.
Keywords: LangChain, Databricks
Resources:
What is Azure used for? | Microsoft Azure
What is Azure AI Studio? – Azure AI Studio | Microsoft Learn.
Fine-tuning in Azure AI Studio – Azure AI Studio | Microsoft Learn.
Retrieval augmented generation (concept) – Azure Machine Learning | Microsoft Learn
Manage resource groups – Azure portal – Azure Resource Manager | Microsoft Learn.
What is Azure OpenAI Service? – Azure AI services | Microsoft Learn
Introduction to Azure AI Search – Azure AI Search | Microsoft Learn
Create a storage account – Azure Storage | Microsoft Learn
Introduction to Blob (object) Storage – Azure Storage | Microsoft Learn
How to generate embeddings with Azure OpenAI Service – Azure OpenAI | Microsoft Learn.
Azure OpenAI Service models – Azure OpenAI | Microsoft Learn
Search index overview – Azure AI Search | Microsoft Learn
Integrated vectorization – Azure AI Search | Microsoft Learn
Easy Guide to Transitioning from OpenAI to Azure OpenAI: Step-by-Step Process
LangChain on Azure Databricks for LLM development – Azure Databricks | Microsoft Learn
Build a RAG-based copilot solution with your own data using Azure AI Studio – Training | Microsoft Learn
RAG and generative AI – Azure AI Search | Microsoft Learn
Retrieval augmented generation in Azure AI Studio – Azure AI Studio | Microsoft Learn
Retrieval Augmented Generation using Azure Machine Learning prompt flow (preview) – Azure Machine Learning | Microsoft Learn
Retrieval-Augmented Generation (RAG) with Azure AI Document Intelligence – Azure AI services | Microsoft Learn
Get Certified with Learn Live GitHub Universe in Spanish
GitHub Universe is approaching! Microsoft and GitHub have joined forces to offer a new special Learn Live series in English and Spanish: GitHub Universe 2024. From October 10 to 24, you will learn how to get the most out of GitHub Copilot, automate with GitHub Actions to build websites and APIs, and much more. You will also receive a discount coupon to take a GitHub certification for $35 USD (the regular price is $99 USD). Register now!
*Offer valid for 48 hours after a session. Limit of one GitHub discount coupon per person. This offer is non-transferable and cannot be combined with any other offer. This offer ends 48 hours after a session and cannot be redeemed for cash. Taxes, if any, are the sole responsibility of the recipient. Microsoft reserves the right to cancel, change, or suspend this offer at any time without notice.
Learn Live GitHub Universe
Whether you're just getting started or looking to sharpen your skills, this is a must-attend event for anyone interested in growing their career in tech. All our sessions will be held on Mexico City time (GMT-6). REGISTER HERE: Learn Live GitHub Universe 2024!
October 10, 5:00 pm GMT-6
Mexico City time
Create impressive READMEs with Markdown
Learn how to use Markdown, a super useful markup language that will let you create impactful content and make your repository stand out on GitHub.
October 17, 5:00 pm GMT-6
Mexico City time
Build a website with GitHub Copilot
Build a web API with Python using the latest GitHub technologies such as Codespaces and GitHub Copilot. Create an example repository you can use as part of the portfolio on your account, following the best guides and examples from experts.
October 24, 5:00 pm GMT-6
Mexico City time
Automate your repository with GitHub Actions
Use GitHub Actions to create automation and avoid manual or repetitive tasks in your project! In this session, you will gain the skills needed to implement automation using GitHub Actions in a code repository.
If you'd like to join the English series, please visit our Microsoft Reactor website and register now!
Exploring GitHub certifications
Earning a GitHub certification is a powerful affirmation of your skills, credibility, trustworthiness, and expertise in the technologies and developer tools used by more than 100 million developers worldwide. GitHub currently offers four certifications, and in October a fifth certification focused on GitHub Copilot will launch.
GitHub Foundations: highlights your understanding of the fundamental topics and concepts of collaborating, contributing, and working on GitHub. This exam covers topics such as collaboration, GitHub products, Git basics, and working within GitHub repositories.
GitHub Actions: certifies your proficiency in automating workflows and accelerating development with GitHub Actions. Test your skills in streamlining workflows, automating tasks, and optimizing software pipelines, including CI/CD, within fully customizable workflows.
GitHub Advanced Security: highlights your knowledge of code security with the GitHub Advanced Security certification. Validate your expertise in identifying vulnerabilities, securing workflows, and implementing robust security, raising the bar for software integrity.
GitHub Administration: certifies your ability to optimize and manage a healthy GitHub environment with the GitHub Admin exam. Highlight your expertise in repository management, workflow optimization, and efficient collaboration to support successful projects on GitHub.
Join us this October for Learn Live GitHub Universe and get a special discount coupon for a GitHub certification.
Defender >>EndpointSecurity-Reports-Summary Vs EDR Onboarding status
I have a query related to MDE >> Endpoint Security >> Endpoint Detection and Response.
To check how many devices are onboarded, I notice two menus,
i.e. Summary and EDR Onboarding status.
What is the difference between Summary and EDR Onboarding status?
Which menu's value should be used to identify the total number of devices successfully onboarded to MDE in a tenant?
Treat DKIM-signed mails from our own domain, sent from an external relay host to Exchange Online, as internal
Hi Folks,
we use some third-party web applications that send mails to Exchange Online via a relay service. These mails carry a valid DKIM signature for our primary domain (d=…). The sender address is our accepted (primary) mail domain.
Exchange Online marks these mails as coming from outside the organisation, as they are authenticated neither via sending IP nor via certificate. As the relay service is shared, we do not want to whitelist its IP or SSL certificate, since others use this system as well.
Is there a way to use the DKIM signature to treat these mails as internal?
Thank you.
Siegmar
Azure Backup-SAP HANA DB Backup Delivers More Value at Lower TCO with Reduced Protected Instance Fee
Azure Backup for SAP HANA Database Delivers More Value at Lower TCO with Reduced Protected Instance Fees starting 1st Sept’2024
At Azure, our commitment to providing superior value to our customers is unwavering. We are thrilled to announce a significant update that will bring enhanced cost efficiency to our SAP HANA Database Backup service. Starting September 1, 2024, we are reducing the Protected Instance (PI) fees for “Azure Backup for SAP HANA on Azure VM.” This change is designed to deliver more value at a lower cost, making it easier for enterprises to protect their critical data without compromising on quality or performance.
New Pricing Structure: More Value, Lower Cost
With the new Protected Instance (PI) fee structure effective 1st September 2024, both SAP HANA Streaming/Backint-based backups and SAP HANA Snapshot-based backups will see reduced costs. Here’s how the new pricing breaks down:
HANA Backint/Streaming Backup (Protected Instance fee, East US2)

| DB Size | Old pricing | New pricing | PI Cost Savings % |
| --- | --- | --- | --- |
| 500 GB | $80 | $80 | No Change |
| 1 TB | $160 | $80 | 50% |
| 5 TB | $800 | $80 | 90% |
| 10 TB | $1600 | $80 | 95% |

HANA Snapshot Backup (Protected Instance fee, East US2)

| DB Size | Old pricing | New pricing | PI Cost Savings % |
| --- | --- | --- | --- |
| 1 TB | $160 | $80 | 50% |
| 10 TB | $1600 | $160 | 90% |
| 20 TB | $3200 | $320 | 90% |
| 30 TB | $4800 | $480 | 90% |
HANA Streaming Backup: A flat rate of $80 (East US2) per instance, with standard regional uplift, regardless of the HANA database size.
For example, if you are protecting 1.2 TB of HANA database in one instance running in the East US2 region, the New PI cost would be flat $80 (East US2 Region) per month. Previously, the cost would have been $240 + storage consumed.
HANA Snapshot Backup: $80 (East US2) per 5 TB increment, with standard regional uplift.
For example, if you have 10 TB of HANA database in one instance running in the East US2 region, the new PI cost would be $160 + storage consumed. Previously, the cost would have been $1600 + storage consumed. Following the SAP recommendation, if you opt for a weekly full streaming backup in addition to a snapshot backup, we will apply a single PI fee, the one applicable to HANA Snapshot backup.
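As a rough illustration of the arithmetic above, here is a small sketch that applies the stated East US2 rates; it ignores regional uplift and storage charges, which you would still add on top:
import math

# Rough sketch of the new East US2 Protected Instance fees described above.
# Ignores regional uplift and backup storage costs (charged separately).
def streaming_pi_fee(db_size_tb: float) -> int:
    return 80  # flat $80 per instance, regardless of database size

def snapshot_pi_fee(db_size_tb: float) -> int:
    return 80 * math.ceil(db_size_tb / 5)  # $80 per 5 TB increment

print(streaming_pi_fee(1.2))   # 80  (matches the 1.2 TB example above)
print(snapshot_pi_fee(10))     # 160 (matches the 10 TB example above)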
For more details on the new pricing structure, visit Pricing – Cloud Backup | Microsoft Azure and Pricing Calculator | Microsoft Azure.
Real-World Impact: Customer Scenarios
To better illustrate the impact of this pricing change, let's consider two typical customer scenarios. The new pricing model combined with compression significantly reduces overall backup costs across the given instance configurations, with TCO savings ranging from 45% to 62%. For large databases, we have implemented a snapshot backup technology that is both fast and cost-effective, using a forever-incremental snapshot approach. This improves the speed of backup and restore operations while also reducing the storage required for backups.
Contoso (Small-Medium Enterprise), Manufacturing and Retail Industry: Customer running, 5 small HANA instances (600 GB each) and 2 large HANA instances (4 TB each) in the East US2 region with Backup Policy of Weekly Full backup and Daily Incremental – Retain Daily Backup for 7 days, retain weekly Backup for 4 weeks and retain Monthly backup for 3 months with ZRS resiliency.
Northwind Traders (Large Enterprise), FMCG Industry: customer running 20 small HANA instances (1 TB each) and 5 large HANA instances (10 TB each) in the East US2 region. Backup Policy of Weekly Full and Daily Incremental – retain daily backups for 7 days, weekly backups for 4 weeks, and monthly backups for 6 months, with ZRS resiliency.
Contoso (SME) – new pricing includes compression

| Configuration | Old PI | Old Storage | Old PI + Storage | New PI | New Storage | New PI + Storage | Overall TCO Savings |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 600 GB x 5 HANA Instances | 800 | 573 | 1373 | 400 | 352 | 752 | 45% |
| 4 TB x 2 HANA Instances | 1440 | 1567 | 3007 | 160 | 962 | 1122 | 62% |

Northwind Traders (Large Enterprise Customer) – new pricing includes compression

| Configuration | Old PI | Old Storage | Old PI + Storage | New PI | New Storage | New PI + Storage | Overall TCO Savings |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 20 x 1 TB HANA Instances | 4800 | 5053 | 9853 | 1600 | 3094 | 4694 | 52% |
| 5 x 10 TB HANA Instances | 8400 | 12633 | 21033 | 400 | 7735 | 8135 | 61% |
*The calculations above use Backint/streaming-based backup.
Why Azure Backup for SAP HANA?
Azure Backup for SAP HANA DB offers native backup support that seamlessly integrates with SAP HANA’s backint APIs. This solution allows businesses to efficiently back up and restore SAP HANA databases running on Azure VMs while taking advantage of the enterprise management capabilities that Azure Backup provides.
Here’s why Azure Backup stands out as a top choice for enterprises:
1. High-Performance Backups with a 15-Minute RPO for Rapid Recovery
Time is of the essence when it comes to data recovery, especially during a critical system failure or cyberattack. Azure Backup delivers an impressive 15-minute RPO (Recovery Point Objective), allowing you to recover essential data in under 15 minutes. This drastically reduces downtime, ensuring that your business can bounce back with minimal impact. For environments with heavy workloads, the speed of backup operations is crucial. Azure Backup achieves impressive performance, with speeds ranging from 1.2-1.5 GBps, thanks to multi-streaming support. This ensures that even large databases can be backed up efficiently, minimizing the time taken to secure critical data.
2. One-Click, Point-in-Time Restores with Continuous Protection
In case of an outage or a ransomware attack, Azure Backup offers point-in-time restores that leverage the power of log backups. With just one click, you can restore production databases to a specific point in time on alternative HANA servers, all while Azure manages backup chains behind the scenes. This is particularly crucial for environments using HANA System Replication (HSR), as it ensures the availability and continuity of critical data, even in complex setups.
3. Versatile Recovery Options for Maximum Flexibility
Whether it’s an accidental deletion, corruption, or disaster recovery, Azure Backup offers a variety of flexible recovery options:
Alternate Location Restore (ALR): Restore your database to a new target virtual machine (VM), giving you the ability to recover data in a different environment if the original one is compromised.
Original Location Restore (OLR): For in-place recovery, OLR allows you to restore data back to its original location, minimizing changes to your infrastructure.
Cross Region Restore (CRR): For enhanced disaster recovery capabilities, you can restore backup items to a secondary, Azure-paired region, ensuring that you can access your data even if one region becomes unavailable.
These versatile recovery options make it easy to meet your specific recovery needs, from minor system refreshes to major disaster recovery operations.
4. Long-Term Retention (LTR) with Cost Savings
Many industries require long-term data retention to meet compliance and auditing requirements. Azure Backup supports Long-Term Retention (LTR), enabling businesses to store backups for years. Plus, Azure’s archive tier allows you to move older recovery points to cheaper storage options, reducing costs while maintaining compliance.
5. Unmatched Ransomware Protection
Ransomware attacks are a growing threat, and organizations need to ensure that their backups are protected from such malicious actions. Azure Backup’s immutability feature ensures that backed-up data cannot be modified or deleted during a specified retention period, even if attackers gain access to the system. Additionally, with Soft Delete, deleted backups are retained for 14 days, ensuring they can be recovered even if accidentally or maliciously deleted.
These security measures, coupled with Multi-User Authorization, Private Endpoints, and Encryption, ensure that your backup data is protected from both internal and external threats.
6. Centralized Backup Management
Managing multiple backup and disaster recovery processes can be complex, but Azure Backup simplifies this with a single pane of glass for monitoring and managing backups across your environment. Whether you’re using the Azure portal, Azure CLI, Terraform, or SDK, Azure Backup provides a unified experience for managing all your backup needs at scale.
7. Native Compression for Storage Optimization
With HANA’s native compression feature, Azure Backup reduces storage consumption by approximately 30-50%, allowing businesses to optimize storage costs while maintaining robust backup processes. This native compression helps ensure that you are storing data more efficiently, without compromising on availability or performance.
Maximizing Storage Cost Efficiency
In addition to the reduced PI fees, there are several strategies you can employ to further optimize your storage costs:
Enable Native HANA Compression: Reduce backup storage consumption by approximately 30-50%. Enable compression via the backup policy or when using the "Backup Now" option.
Adjust Backup Policy: Switch from a “daily full” backup to a “weekly full” with “daily incremental” backups to save on storage costs.
Utilize Snapshot Backups: For large databases (> 10TB), snapshot backups are ideal as they start with a full backup and are forever incremental.
Opt for Reserved Pricing: Commit to a one- or three-year reservation to benefit from discounted Azure Backup Storage Reserved Capacity pricing.
Use Archive Tier: Leverage the archive tier for long-term retention of recovery points.
These changes underscore our dedication to helping you achieve greater cost efficiency and value in your SAP HANA database backups on Azure. We encourage you to review your backup strategies and take advantage of the new pricing and optimization options available.
Help with formula
Hello,
I literally cannot come up with the solution, and I have tried SUMPRODUCT and SUMIFS,
matching 2 criteria to find the data belonging to them. Which formula should I be using? INDEX & MATCH?
So basically, what I am trying to do is get the data to show up in column X that was booked with invoice2 and prod1, etc.
Keep pushing the boundaries. A journey with Parkinson’s
Meet Somesh Pathak, a Security MVP from the Netherlands, whose journey took an unexpected turn when he was diagnosed with early-onset Parkinson’s in 2020. From battling the stigma at work to finding strength in his family, Somesh’s story is one of resilience and determination. We asked him how this challenge has reshaped his journey as a professional, community leader, and father.
How has being diagnosed with early-onset Parkinson’s affected both your personal and professional life?
I was diagnosed with Young-onset Parkinson's disease (YOPD) back in 2020, but early signs of hypokinetic rigid syndrome started in 2016–2017. I was living in India at the time and the doctors were not able to find the cause of my symptoms. I relocated to Stockholm in 2019 and underwent extensive tests at the Karolinska Hospital, where I was diagnosed with idiopathic Parkinson's disease.
The transformation from a healthy life to a life with a hidden disability has seriously affected my personal and professional life. One of the most difficult challenges I faced professionally was the lack of support from my previous manager at a former employer. At one point, I felt ashamed of having YOPD, primarily due to my symptoms and the stigma at work. Instead of receiving the assistance I needed, I was asked to work extended hours, which only added to the stress.
Controlling the illness has also been quite challenging. One of the most difficult things I’ve experienced is feeling a lot of guilt, mostly related to my family and my 5-year-old son. I feel like I am not giving my all as even sports or lighthearted things suddenly have become challenging. Often depressed and dissatisfied, I feel as though I do not always reciprocate 100% the efforts of others who look after me.
Socially, my self-esteem and behavior around others has been impacted, because I fear my symptoms will show up or change randomly. The symptoms are always there, influencing my voice, sleep, diet, and general physical condition. Notwithstanding these difficulties, I try to keep as much normalcy as I can by balancing work and my personal life.
What has been the most challenging aspect of your Parkinson’s journey, and how have you managed to stay motivated throughout?
The psychological and emotional toll has been among the toughest aspects of my Parkinson’s path. Though the physical suffering was difficult, the heaviest toll on me was the feeling of frustration and helplessness. Dealing with YOPD already left the left side of my body weak, and after surgery to repair my ACL, my right side became significantly affected as well. This coupled with the understanding that healing could last up to nine months often made me feel as though I was losing the battle.
The unflinching support of my family, especially my wife and son, kept me going through these challenging times. My wife was my rock; she shared inspirational tales of those who had surmounted much more difficult obstacles in their lives, motivating me to keep on. Her support allowed me to concentrate on recovery and bounce back with full confidence. Another inspiration came from my son, reminding me of the need to be present and resilient for him.
Apart from my family, my colleagues have played a huge role in keeping me motivated. They encouraged me to keep pushing my boundaries, both physically and mentally, and gave me the confidence to face each day with a renewed sense of determination.
Though it has not been simple, with the emotional support of my loved ones and the strength I get from their belief in me, I have been motivated through one of the toughest obstacles of my life.
What does being a Microsoft MVP and contributing to the tech community mean to you?
Being a Microsoft MVP is about more than just title or recognition; it is about a dedication to significantly contribute to the tech community in meaningful ways. Every contribution — from blogging to event speaking to helping others solve their tech challenges, each contribution is an opportunity to make a positive impact. For me, it is about sharing knowledge, inspiring others, and being part of a larger community that thrives on innovation and collaboration.
Particularly on the tough days when other aspects of my life—such as dealing with health challenges—have been overwhelming, the MVP program has been a major source of encouragement. Knowing that my efforts are valued and that I can make a difference in someone’s learning journey motivates me to keep pushing forward. It gives me a sense of purpose and direction, even when times are difficult.
The community’s outpouring of support when I shared my personal story at a monthly Nordics & Benelux MVP call was quite inspiring.
My peers’ encouragement and solidarity reminded me of the power of a strong, supporting group and helped drive me. It is this sense of belonging, coupled with the opportunity to give back, that makes being a Microsoft MVP so special to me.
Based on your personal experiences and insights, what advice would you offer to someone going through a personal, health, or professional crisis?
Life's challenges, be they physical injuries or health conditions, test our limits. But they also reveal our strength, resilience, and capacity to adapt. Through my journey of recovery and community contribution, I have learned that while the path may be tough, the destination is worth every struggle. There will be more challenges, more hurdles to overcome, but with every step, you grow stronger.
Keep pushing the boundaries. Stay strong, stay motivated, and continue making a difference. We are all in this together, and together, we can achieve greatness. There is no limit to what we can achieve and what we can accomplish when we refuse to give up.
To everyone out there facing their own battles, remember this: Keep pushing. Your spirit is stronger than you think. Use your challenges as a springboard to greater heights. Together, we can achieve incredible things, one step at a time.
expand a dynamic named range
I work in insurance. I am building a template that can work with our customers who have a random number of health plan options (possibly more than one each of: health plan, dental plan, vision plan). The goal is to do a total plan design cost. There are 4 options with each potential plan choice: Employee Only (EO), Employee Plus Spouse (ESP), Employee Plus Children (ECH), Employee Plus Family (FAM).
I have a dynamic named range that concatenates all health plans, followed by all dental plans, followed by all vision plans, into a single column. This data set can be a minimum of 3 (1 health, 1 dental, 1 vision) to a maximum of who knows (as each type of policy can have many options).
What I need is to create a dynamic table whose first column is the plan name (i.e., health1, health2, health3, dental1, dental2, vision1, vision2, vision3, etc.). The second column would have EO, ESP, ECH, FAM (the 4 options I listed earlier) for each item in the first column, and subsequent columns would have premium data that I can deal with.
The problem I’m having is coming up with a way to create this dynamic table where the health1,health2, etc. stuff is inserted in every 4th cell, so the 2nd column gives the 4 different options EO, ESP, ECH, FAM for each of the entries health1,health2, etc.
Anybody have a suggestion how to expand a dynamic named range so I can have it populate the first column of a dynamic length table every 4th row?
The Future of AI: Fine-Tuning Llama 3.1 8B on Azure AI Serverless, why it’s so easy & cost efficient
The Future of AI: LLM Distillation just got easier
Part 2 – Fine-Tuning Llama 3.1 8B on Azure AI Serverless
How Azure AI Serverless Fine-tuning, LoRA, RAFT and the AI Python SDK are streamlining fine-tuning of domain-specific models. (🚀🔥 Github recipe repo).
By Cedric Vidal, Principal AI Advocate, Microsoft
Part of the Future of AI 🚀 series initiated by Marco Casalaina with his Exploring Multi-Agent AI Systems blog post.
AI-powered engine fine-tuning setup, generated using Azure OpenAI DALL-E 3
In our previous blog post, we explored utilizing Llama 3.1 405B with RAFT to generate a synthetic dataset. Today, you'll learn how to fine-tune a Llama 3.1 8B model with the dataset you generated. This post will walk you through a simplified fine-tuning process using Azure AI Fine-Tuning as a Service, highlighting its ease of use and cost efficiency. We'll also explain what LoRA is and why combining RAFT with LoRA provides a unique advantage for efficient and affordable model customization. Finally, we'll provide practical, step-by-step code examples to help you apply these concepts in your own projects.
> The concepts and source code mentioned in this post are fully available in the Github recipe repo.
Azure AI takes the complexity out of the equation. Gone are the days when setting up GPU infrastructure, configuring Python frameworks, and mastering model fine-tuning techniques were necessary hurdles. Azure Serverless Fine-Tuning allows you to bypass the hassle entirely. Simply upload your dataset, adjust a few hyperparameters, and start the fine-tuning process. This ease of use democratizes AI development, making it accessible to a wider range of users and organizations.
Why Azure AI Serverless Fine-Tuning Changes the Game
Fine-tuning a model used to be a daunting task:
Skill Requirements: Proficiency in Python and machine learning frameworks like TensorFlow or PyTorch was essential.
Resource Intensive: Setting up and managing GPU infrastructure required significant investment.
Time-Consuming: The process was often lengthy, from setup to execution.
Azure AI Fine-Tuning as a Service eliminates these barriers by providing an intuitive platform where you can fine-tune models without worrying about the underlying infrastructure. With serverless capabilities, you simply upload your dataset, specify hyperparameters, and hit the “fine-tune” button. This streamlined process allows for quick iterations and experimentation, significantly accelerating AI development cycles.
Llama relaxing in a workshop, generated using Azure OpenAI DALL-E 3
LoRA: A Game-Changer for Efficient Fine-Tuning
What is LoRA?
LoRA (Low-Rank Adaptation) is an efficient method for fine-tuning large language models. Unlike traditional fine-tuning, which updates all the model's weights, LoRA modifies only a small fraction of the weights captured in an adapter. This focused approach drastically reduces the time and cost needed for fine-tuning while maintaining the model's performance.
LoRA in Action
LoRA fine-tunes models by selectively adjusting a small fraction of weights via an adapter, offering several advantages:
Selective Weight Updating: Only a fraction of the weights are fine-tuned, reducing computational requirements.
Cost Efficiency: Lower computational demands translate to reduced operational costs.
Speed: Fine-tuning is faster, enabling quicker deployments and iterations.
Illustration of LoRA Fine-tuning. This diagram shows a single attention block enhanced with LoRA. Each attention block in the model typically incorporates its own LoRA module. SVG diagram generated using Azure OpenAI GPT-4o
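To make the idea concrete, here is a small illustrative sketch (not the Azure implementation) of how a LoRA adapter adds a low-rank update on top of a frozen weight matrix; the dimensions, rank, and scaling factor are arbitrary values chosen for the example:
import numpy as np

# Illustrative sketch of a LoRA-style update (not Azure's implementation).
# A frozen weight matrix W is augmented by a low-rank product B @ A,
# so only A and B (a tiny fraction of the parameters) are trained.
d, r = 4096, 8                        # hidden size and LoRA rank (arbitrary)
W = np.random.randn(d, d) * 0.02      # frozen pretrained weights
A = np.random.randn(r, d) * 0.01      # trainable down-projection
B = np.zeros((d, r))                  # trainable up-projection, starts at zero
alpha = 16                            # scaling factor

def lora_forward(x):
    # Base path through frozen weights plus the scaled low-rank correction.
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = np.random.randn(1, d)
print(lora_forward(x).shape)          # (1, 4096)
print(2 * d * r, "trainable params vs", d * d, "for full fine-tuning")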
Combining RAFT and LoRA: Why It’s So Effective
We've seen how Serverless Fine-tuning on Azure AI uses LoRA, which updates only a fraction of the model's weights and can therefore be so cheap and fast.
With the combination of RAFT and LoRA, the model is not taught new fundamental knowledge; rather, it becomes an expert at understanding the domain, focusing its attention on the citations that are most useful to answer a question, but it doesn't contain all the information about the domain. It is like a librarian (see the RAG Hack session on RAFT): a librarian doesn't know the content of every book perfectly, but they know which books contain the answers to a given question.
Another way to look at it is from the standpoint of information theory. Because LoRA only updates a fraction of the weights, there is only so much information you can store in those weights, as opposed to full-weight fine-tuning, which updates all the weights of the model from bottom to top.
LoRA might look like a limitation but it’s actually perfect when used in combination with RAFT and RAG. You get the best of RAG and fine-tuning. RAG provides access to a potentially infinite amount of reference documents and RAFT with LoRA provides a model which is an expert at understanding the documents retrieved by RAG at a fraction of the cost of full weight fine-tuning.
Azure AI Fine-Tuning API and the Importance of Automating your AI Ops Pipeline
Azure AI empowers developers with serverless fine-tuning via an API, simplifying the integration of fine-tuning processes into automated AI operations (AI Ops) pipelines. Organizations can use the Azure AI Python SDK to further streamline this process, enabling seamless orchestration of model training workflows. This includes systematic data handling, model versioning, and deployment. Automating these processes is crucial as it ensures consistency, reduces human error, and accelerates the entire AI lifecycle—from data preparation, through model training, to deployment and monitoring. By leveraging Azure AI’s serverless fine-tuning API, along with the Python SDK, organizations can maintain an efficient, scalable, and agile AI Ops pipeline, ultimately driving faster innovation and more reliable AI systems.
Addressing Model Drift and Foundation Model Obsolescence
One critical aspect of machine learning, especially in fine-tuning, is ensuring that models generalize well to unseen data. This is the primary purpose of the evaluation phase.
However, as domains evolve and documents are added or updated, models will inevitably begin to drift. The rate of this drift depends on how quickly your domain changes; it could be a month, six months, a year, or even longer.
Therefore, it’s essential to periodically refresh your model and execute the distillation process anew to maintain its performance.
Moreover, the field of AI is dynamic, with new and improved foundational models being released frequently. To leverage these advancements, you should have a streamlined process to re-run distillation on the latest models, enabling you to measure improvements and deploy updates to your users efficiently.
Why Automating the Distillation Process is Essential
Automation in the distillation process is crucial. As new documents are added or existing ones are updated, your model’s alignment with the domain can drift over time. Setting up an automated, end-to-end distillation pipeline ensures that your model remains current and accurate. By regularly re-running the distillation, you can keep the model aligned with the evolving domain, maintaining its reliability and performance.
Practical Steps: Fine-Tuning Llama 3.1 8B with RAFT and LoRA
Now that we’ve explained the benefits, let’s walk through the practical steps using the raft-distillation-recipe repository on GitHub.
If you have not yet run the synthetic data generation phase using RAFT, I invite you to head over to the previous article of this blog series.
Once you have your synthetic dataset on hand, you can head over to the finetuning notebook of the distillation recipe repository.
Here are the key snippets of code illustrating how to use the Azure AI Python SDK to upload a dataset, subscribe to the Marketplace offer, and create and submit a fine-tuning job on the Azure AI Serverless platform.
Uploading the training dataset
The following code checks if the training dataset already exists in the workspace and uploads it only if needed. It incorporates the hash of the dataset into the filename, facilitating easy detection of whether the file has been previously uploaded.
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities._inputs_outputs import Input

# workspace_ml_client, workspace, ds_name, train_hash, and dataset_path_ft_train
# are defined in earlier cells of the notebook.
dataset_version = "1"
train_dataset_name = f"{ds_name}_train_{train_hash}"
try:
    train_data_created = workspace_ml_client.data.get(train_dataset_name, version=dataset_version)
    print(f"Dataset {train_dataset_name} already exists")
except:
    print(f"Creating dataset {train_dataset_name}")
    train_data = Data(
        path=dataset_path_ft_train,
        type=AssetTypes.URI_FILE,
        description=f"{ds_name} training dataset",
        name=train_dataset_name,
        version=dataset_version,
    )
    train_data_created = workspace_ml_client.data.create_or_update(train_data)

training_data = Input(
    type=train_data_created.type,
    path=f"azureml://locations/{workspace.location}/workspaces/{workspace._workspace_id}/data/{train_data_created.name}/versions/{train_data_created.version}",
)
Subscribing to the Marketplace offer
This step is only necessary when fine-tuning a model from a third-party vendor such as Meta or Mistral. If you're fine-tuning a Microsoft first-party model such as Phi 3, you can skip this step.
from azure.ai.ml.entities import MarketplaceSubscription
from azure.core.exceptions import ResourceExistsError

model_id = "/".join(foundation_model.id.split("/")[:-2])
subscription_name = model_id.split("/")[-1].replace(".", "-").replace("_", "-")

print(f"Subscribing to Marketplace model: {model_id}")
marketplace_subscription = MarketplaceSubscription(
    model_id=model_id,
    name=subscription_name,
)
try:
    marketplace_subscription = workspace_ml_client.marketplace_subscriptions.begin_create_or_update(marketplace_subscription).result()
except ResourceExistsError as ex:
    print(f"Marketplace subscription {subscription_name} already exists for model {model_id}")
Create the fine-tuning job using the model and data as inputs
finetuning_job = CustomModelFineTuningJob(
    task=task,
    training_data=training_data,
    validation_data=validation_data,
    hyperparameters={
        "per_device_train_batch_size": "1",
        "learning_rate": str(learning_rate),
        "num_train_epochs": "1",
        "registered_model_name": registered_model_name,
    },
    model=model_to_finetune,
    display_name=job_name,
    name=job_name,
    experiment_name=experiment_name,
    outputs={"registered_model": Output(type="mlflow_model", name=f"ft-job-finetune-registered-{short_guid}")},
)
Submit the fine-tuning job
The following snippet will submit the previously created fine-tuning job to the Azure AI serverless platform. If the submission is successful, the job details including the Studio URL and the registered model name will be printed. Any errors encountered during the submission will be displayed as well.
try:
    print(f"Submitting job {finetuning_job.name}")
    created_job = workspace_ml_client.jobs.create_or_update(finetuning_job)
    print(f"Successfully created job {finetuning_job.name}")
    print(f"Studio URL is {created_job.studio_url}")
    print(f"Registered model name will be {registered_model_name}")
except Exception as e:
    print("Error creating job", e)
    raise e
The full runnable code is available in the previously mentioned finetuning notebook.
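If you want to track progress from code rather than the Studio URL, a simple polling loop like the sketch below (our own addition, not part of the recipe repo) works with the same workspace client:
import time

# Sketch: poll the submitted job until it reaches a terminal state.
# Reuses the workspace_ml_client and created_job from the snippets above.
terminal_states = {"Completed", "Failed", "Canceled"}
while True:
    job = workspace_ml_client.jobs.get(created_job.name)
    print(f"Job {job.name} status: {job.status}")
    if job.status in terminal_states:
        break
    time.sleep(60)  # check once a minute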
Join the Conversation
We invite you to join our tech community on Discord to discuss fine-tuning techniques, RAFT, LoRA, and more. Whether you’re a seasoned AI developer or just starting, our community is here to support you. Share your experiences, ask questions, and collaborate with fellow AI enthusiasts. Join us on Discord and be part of the conversation!
What’s next?
This concludes the second installment of our blog series on fine-tuning the Llama 3.1 8B model with RAFT and LoRA, harnessing the capabilities of Azure AI Serverless Fine-Tuning. Today, we’ve shown how these advanced technologies enable efficient and cost-effective model customization that precisely meets your domain needs.
By integrating RAFT and LoRA, you can transform your models into specialists that effectively navigate and interpret relevant information from extensive document repositories using RAG, all while significantly cutting down on the time and costs associated with full weight fine-tuning. This methodology accelerates the fine-tuning process and democratizes access to advanced AI capabilities.
With the detailed steps and code snippets provided, you now have the tools to implement serverless fine-tuning within your AI development workflow. Leveraging automation in AI Ops will help you maintain and optimize model performance over time, keeping your AI solutions competitive in an ever-changing environment.
Stay tuned! In two weeks, we’ll dive into the next topic: deploying our fine-tuned models.
I dont understand how to add custom config to my app with android
Hi there
I am struggling to understand how I can add custom config to my app on Android so that when users download the app from Intune, the config is automatically passed with the installation.
The config is a key/value pair:
qrCode: “{“applicationName”:”LoremIpsum”,”baseUrl”:”https://LoremIpsum.com.au/01/api/mobile/“}”
acceptTermsAndConditions: true
The qrCode value is a string
The acceptTermsAndConditions value is a boolean
I have managed to get this working with ios very easily. Ios app config provides a nice UI to add key/value pair (first image). Android (second image) however does not provide this UI to add config and I dont understand why. Appreciate if someone can tell me how to get android config to be entered like ios that would be amazing.
Desktop support enrolling Autopilot devices – DeviceCapReached error
We're currently in the middle of a quarterly equipment lease swap and have had a couple of people on our team get the DeviceCapReached error when we go to enroll an Autopilot device. This is happening because we're enrolling the devices with our accounts rather than having the user sign in, then taking the laptop back from them to put it in the right on-prem OU, run updates, and install all of the software they need. I understand this isn't how Microsoft designed Autopilot to work, but this is where we're at.
I've done research into potential resolutions, but I have a lot of questions. First, some important details:
User-driven deployment profile (future proofing, I guess)
Microsoft Entra hybrid enrollment
Intune device enrollment limit – 7
Azure tenant device limit – 20
The first option seems to be creating a script that clears out stale devices from our Azure tenant. When I've spoken with our Infrastructure team about device removal in the past, they said we're using Entra Connect to sync with on-prem AD, so they were against the idea. I've found a way to convince them otherwise, but it's going to take time and scripting.
The next option is using a device enrollment manager account, but the Microsoft documentation mentions it enrolls the device in shared mode and that device limits won’t work on devices enrolled this way. It also says “Do not delete accounts assigned as a Device enrollment manager if any devices were enrolled using the account. Doing so will lead to issues with these devices.” but doesn’t elaborate further. So, this option seems like a dead end.
Third option is to increase the device enrollment quota in Azure, but since this is a tenant wide setting, we don’t necessarily want to give Rick in accounting the ability to enroll as many devices as he can carry.
I found a comment in this thread that suggested using Remove-AzureADDeviceRegisteredOwner (now Remove-MgDeviceRegisteredOwnerByRef with the Graph modules). But this just changes the primary user. Doing so didn't stop me from getting the error message.
So here are my questions –
If you’ve gone through this, how did you resolve the issue?
What exactly are the consequences of using a DEM account to enroll devices?
If I look at the devices attached to my user account, and filter by Autopilot devices, I have 42. Other offices have a single desktop person, and they have > 80 devices. What device property, in which directory, causes this error?
Do you have a stale device script you'd recommend? I'll write my own, for sure, but having something to go off of would be nice.
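For reference, the rough (untested) starting point I have in mind is just a report: pull devices from Microsoft Graph whose approximateLastSignInDateTime is older than a cutoff and review them before removing anything. The Python sketch below assumes an app registration with Device.Read.All; the access token and cutoff date are placeholders.
# Rough, untested sketch: report Entra ID devices idle since before a cutoff date.
# Assumes an app registration with Device.Read.All; ACCESS_TOKEN is a placeholder
# (e.g. obtained via MSAL). Nothing is deleted here - report only.
import requests

ACCESS_TOKEN = "<token acquired via MSAL>"  # placeholder
cutoff = "2024-06-01T00:00:00Z"  # review anything idle since before this date

resp = requests.get(
    "https://graph.microsoft.com/v1.0/devices",
    params={
        "$filter": f"approximateLastSignInDateTime le {cutoff}",
        "$select": "id,displayName,approximateLastSignInDateTime,trustType",
        "$count": "true",  # required for this advanced directory filter
    },
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "ConsistencyLevel": "eventual",  # required for this advanced directory filter
    },
)
resp.raise_for_status()
for device in resp.json().get("value", []):
    print(device["displayName"], device["approximateLastSignInDateTime"])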
MS Form multiple images
I created a simple feedback questionnaire using MS Forms. The image I selected not only shows in the background, but is also shown as a duplicate on the front image. It doesn’t matter what Customized layout I use.
Is it possible to have 2 different images on an MS Form? One on the background and the front one changed?
Partner Case Study Series | Dynamica Google Maps Integration brings Google Maps to Dynamics
Adaptability meets quality in Dynamica Labs
Dynamica Labs is a Microsoft Gold Partner whose team has pursued customer relationship management (CRM) development for more than 14 years. Its goal is to deliver the highest quality at an affordable rate. It accomplishes that using a hybrid delivery model: The company is based in London, UK, with a nearshore development center in Eastern Europe. Its expertise includes industrial and pharmaceutical distribution, professional services, commercial real estate, high-tech companies, and IT support services.
The company’s Dynamica Google Maps Integration solution, available on Microsoft AppSource, is a records location tool that is ideal for real estate and distribution companies.
“Google Maps is the leading geospatial database, providing a lot of tools and information for businesses,” said Igor Sarov, CEO at Dynamica Labs. “The Dynamica Google Maps Integration solution combines the power of the Microsoft Dynamics platform and the largest geodatabase. This makes it a perfect tool for companies that do daily route planning, and companies that want to quickly assess a building or location.”
Continue reading here
Explore all case studies or submit your own
Upload file to OneDrive folder via PHP / which product do I / my customer need?
Hi,
I’m totally new to this topic.
I'm about to develop an app (backend: PHP, frontend: JS) which should be able to push files (generated in PHP on my server) to a OneDrive folder of my customer. I am stuck on which product I (for developing) and later my customer need for that.
I think I need the "client credentials flow"; which MS product/account is needed for that?
Is a OneDrive account enough?
Do I/my customer need an additional Azure account to register the app?
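To make the question more concrete, the flow I have in mind looks roughly like the sketch below (written in Python rather than PHP just to show the endpoints; the tenant ID, client ID/secret, drive ID, and the Files.ReadWrite.All permission are assumptions on my part, not something I've set up yet):
# Rough sketch of the app-only (client credentials) flow I think I need.
# Assumes an Entra ID app registration in the customer tenant with the
# Files.ReadWrite.All application permission and admin consent; all values
# below are placeholders.
import requests

TENANT_ID = "<customer tenant id>"
CLIENT_ID = "<app registration client id>"
CLIENT_SECRET = "<client secret>"
DRIVE_ID = "<target drive id>"

# 1) Acquire an app-only token.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2) Simple upload of a small file via Microsoft Graph (larger files need an upload session).
with open("report.pdf", "rb") as f:
    upload_resp = requests.put(
        f"https://graph.microsoft.com/v1.0/drives/{DRIVE_ID}/root:/reports/report.pdf:/content",
        headers={"Authorization": f"Bearer {access_token}"},
        data=f,
    )
upload_resp.raise_for_status()
print("Uploaded:", upload_resp.json()["webUrl"])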
Thanks for help
Mail merge
Does anyone here have experience with merging Excel data into Word documents? I had one of those days with Office that I would sooner forget. I have an XLM file – not too heavy, but with OnCalculate VBA – which used to supply data into Word without any problems when we were still using Dropbox and a pre-365 environment. Since moving to O365 and SPO, Word will happily take 15 minutes to open the document.
There is some weird trick by which I can force Word to give up on DDE and switch to something called OLEDB. It brings the time down to 1 minute – still pretty rubbish. I talked to my new friend ChatGPT about it, and I now know that I am in trouble because, after initially coming up with many helpful suggestions, in the end it referred me to Microsoft Support … 😞
One of its suggestions was to rebuild the Word doc from scratch. That wasn't a big deal because the doc in question is a one-pager. A bit of CTRL-C/CTRL-V got the job done in no time. It also suggested that I move the Excel off SPO. I saved a copy into Downloads before making a new connection from the new doc. But here is the weirdest thing: when the OLEDB connections eventually resolved – we are talking no less than 5 minutes here – the fields in Word did not contain the values from the open Excel. The values displayed in the Recipient list were the values from the file on SPO. Not that Word would actually populate the preview mailings.
Does this ring any bells with anyone? Thanks.
Ajio Invite Code? KAR1CY40K (Claim Exclusive Now)
Ajio Invite code is KAR1CY40K; using this invite code you can claim an exclusive bonus & extra discount on your purchase in Ajio. You can also use this bonus at the time of shopping for any product from Ajio. Also share your invite code with your friends to earn up to Rs.2000. Ajio is one of India's fastest-growing shopping platforms, offering lightning-fast product delivery and an original quality experience to its users.
What is the Ajio Invite Code?
KAR1CY40K is the Ajio app invite code. By applying the invite code you will get the best signup bonus of up to Rs.1500. You can earn up to Rs.2000 by sharing your invite code with your friends. Ajio offers Rs.250 extra off on purchases from the store using the KAR1CY40K code.
Ajio Invite Code 2024
App Name: Ajio
Ajio Invite Code: KAR1CY40K
Sign Up Rewards: Exclusive Bonus
Per Invite: Rs.2000
Cashback: Rs.1550
Per App Content Filter on iOS
I am testing the Per-App Content Filter (iOS 16 onwards) feature for iOS. Per-App Content Filter entitlements can run on a managed device only, hence these entitlements must be pushed through MDM.
Apple documentation:
https://developer.apple.com/documentation/technotes/tn3134-network-extension-provider-deployment?language=objc
https://developer.apple.com/documentation/networkextension/content_filter_providers?language=objc
So far, my research concluded that Intune does not support this the way it supports per-app VPN.
Then I tried pushing the content filter profile as a custom profile and the ContentFilterUUID as an app configuration policy targeted at the third-party app. The content filter gets pushed, but it does not get mapped to the third-party app, so it does not run and remains in an invalid state until the mapping is correct.
Can anyone help me with how I can achieve this in Intune?
Side note: Jamf provides this built in, like per-app VPN, and I can see the payload (from iOS system logs) looks like the below:
NESMFilterSession[Content Filter 16 May 2024:5F0ABFF4-5414-40D4-AD95-AE207D890720]: handling configuration changed: {
name = <26-char-str>
identifier = 5F0ABFF4-5414-40D4-AD95-AE207D890720
externalIdentifier = <36-char-str>
application = com.test.ent.app
grade = 1
contentFilter = {
enabled = YES
provider = {
pluginType = com.test.ent.app
organization = <7-char-str>
filterBrowsers = NO
filterPackets = NO
filterSockets = YES
disableDefaultDrop = NO
preserveExistingConnections = NO
}
filter-grade = 1
per-app = {
appRules = (
{
matchSigningIdentifier = org.mozilla.ios.Firefox
noDivertDNS = NO
},
)
excludedDomains = ()
}
}
payloadInfo = {
payloadUUID = FC494E29-90AE-4C56-B57A-2E501A17553A
payloadOrganization = <13-char-str>
profileUUID = C2074E3F-39F1-4A48-B979-FE13C0FBC779
profileIdentifier = <36-char-str>
isSetAside = NO
profileIngestionDate = 2024-08-16 21:30:23 +0000
systemVersion = Version 17.5.1 (Build 21F90)
profileSource = mdm
}
}