Category Archives: Microsoft
Recent changes could not be saved. Refresh the page to undo the changes and continue working
Hello,
Over the past few days I have experienced problems with Project for the web.
Changes are not being saved in the project folder; instead, this error message appears.
“Recent changes could not be saved. Refresh the page to undo the changes and continue working. Correlation ID: 6883f5fd-35f0-4bef-8423-8e5bf2dfd36f”
What does this mean and what must be done so that I can use Project for the web again?
Best regards,
How does pricing work on Azure Stack HCI?
Hi Experts,
We are planning to use Azure Stack HCI, and one of our questions is how pricing works for SQL on a VM versus SQL as PaaS on HCI.
In our understanding, SQL on a VM requires a SQL license, an OS license, and compute and storage costs.
PaaS SQL on HCI requires only compute and storage costs.
Is this correct? Which one is the most cost-effective, and can you expound on why?
Thank you in advance.
Matching Excel row data to filenames based on matching data left of the underscore
Hoping somebody can help with what I believe can be done either using Power Query or VBA.
I have an Excel file that includes thousands of entries. Each entry has an object number, and SOME objects will have an associated file on the hard drive (or SharePoint / remote server). I would like the script to review the portion of the file name left of the first underscore, match it to the object number, and record the filename in the correct field, “demo file path”, in the attached sample file. E.g. object number 123 is related to file 123_has-extra_from_3733-AREA_RULE_Progress_Report_Feb._231955.pdf
I am OK if the system needs to place a message in the related entry that states “no match found”; it is a bonus but not required.
Hopefully, with the sample data (which includes a second tab listing the files in the existing folder) and the above details, somebody can help out.
If the solution cannot read directly from a specific folder / site and needs a worksheet with all the file names, I can easily work with that solution.
Thank you.
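For illustration, here is a minimal Python sketch of the matching logic being asked for; the real solution would live in Power Query or VBA, and the folder path and object numbers below are made-up placeholders:

import os

# Hypothetical folder and object numbers; in the workbook these come from the sheet.
folder = r"C:\demo_files"
object_numbers = ["123", "456"]

# Map the text left of the first underscore to the full filename.
prefix_to_file = {}
for name in os.listdir(folder):
    prefix = name.split("_", 1)[0]
    prefix_to_file.setdefault(prefix, name)

# Record the matching filename, or "no match found", per object number.
for obj in object_numbers:
    print(obj, "->", prefix_to_file.get(obj, "no match found"))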
How can I mute and unmute myself from the Windows taskbar in Microsoft Teams?
Specifically, I’m trying to find a way to mute and unmute myself directly from the Windows taskbar. I’ve noticed that sometimes my audio is turned on and I’m accidentally broadcasting my background noise to the team. I’d like to be able to quickly mute and unmute myself without having to navigate through the Microsoft Teams app or desktop interface.
Does anyone know how to access the audio settings from the Windows taskbar, or if there’s a shortcut or hotkey that allows me to quickly mute and unmute myself in Microsoft Teams?
Generating a list of dates using a nested array inside the SEQUENCE function?
Hello,
I am trying to generate a list of dates with different intervals using an array inside a single SEQUENCE function. How do I do this? Cell references are in brackets below, along with the results I’m looking for.
Ex:
Start Date End Date Day Interval
(A1) 2024-02-15 (B1) 2024-03-07 (C1) 7
(A2) 2024-02-23 (B2) 2024-03-22 (C2) 14
Looking to generate
2024-02-15
2024-02-22
2024-02-29
2024-03-07
2024-02-23
2024-03-08
2024-03-22
Basically, it generates a list of values going through each start date in the array (A1:A2) and its respective interval (C1:C2). I am trying to use one instance of the SEQUENCE function to do this, nesting arrays inside, instead of using a separate SEQUENCE function for each start date. Any ideas? Thank you in advance.
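For reference, here is a short Python sketch that makes the expected output explicit (dates and intervals taken from the example above; this is not an Excel solution, just the logic a single dynamic-array formula would need to reproduce):

from datetime import date, timedelta

# (start date, end date, day interval) per row, as in A1:C2 above.
rows = [
    (date(2024, 2, 15), date(2024, 3, 7), 7),
    (date(2024, 2, 23), date(2024, 3, 22), 14),
]

# Walk each row's date range in its own step size, concatenating the results.
for start, end, step in rows:
    current = start
    while current <= end:
        print(current.isoformat())
        current += timedelta(days=step)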
Retrieve deleted videos from Stream (Classic)
Hello,
Our team had a few meeting recordings which were saved on Microsoft Stream (Classic). Recently, when we try to access those videos, they appear to be inaccessible. We realize that Stream (Classic) was retired on 15 April. Is there any way to retrieve the videos? Please advise.
Regards,
Manoj
Office Script to add image
Hi! I am using the following code to add an image into an Excel sheet automatically. However, it only works for JPG files and not for JPEG and PNG files. How can I modify the following code so that it works for all three types of image files? Thank you very much.
function main(workbook: ExcelScript.Workbook, base64ImageString: string, imageName: string) {
  // Get the table row count to find the next free row.
  let sheet1 = workbook.getWorksheet('In progress');
  let table1 = workbook.getTable('Inprogress');
  let rowCount = table1.getRowCount();
  let n1: number = rowCount + 1;
  let imageAddress: string = 'J' + n1.toString();
  let nameAddress: string = 'N' + n1.toString();
  let range1: string = n1 + ':' + n1;
  // Make the row tall enough for the image.
  sheet1.getRange(range1).getFormat().setRowHeight(230);
  let range = sheet1.getRange(imageAddress);
  // Insert the image from its base64 string and position it over the target cell.
  let image = sheet1.addImage(base64ImageString);
  image.setName(imageName);
  image.setTop(range.getTop());
  image.setLeft(range.getLeft());
  image.setWidth(300);
  image.setHeight(225);
  sheet1.getRange(nameAddress).setValue(imageName);
}
RAG on structured data with PostgreSQL
RAG (Retrieval Augmented Generation) is one of the most promising uses for large language models. Instead of asking an LLM a question and hoping the answer lies somewhere in its weights, we instead first query a knowledge base for anything relevant to the question, and then feed both those results and the original question to the LLM.
We have many RAG solutions out there for asking questions on unstructured documents, like PDFs and Word documents. Our most popular Azure solution for this scenario includes a data ingestion process to extract the text from the documents, chunk them up into appropriate sizes, and store them in an Azure AI Search index. When your RAG is on unstructured documents, you’ll always need a data ingestion step to store them in an LLM-compatible format.
But what if you just want users to ask questions about structured data, like a table in a database? Imagine customers that want to ask questions about the products in a store’s inventory, and each product is a row in the table. We can use the RAG approach there, too, and in some ways, it’s a simpler process.
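At its core, the flow is: retrieve matching rows, then hand them to the model along with the question. A minimal, self-contained Python sketch of that idea (all names and data here are illustrative, not the solution's actual API):

# Stand-in for the hybrid full-text + vector search described below.
def hybrid_search(question: str, rows: list[dict], limit: int = 20) -> list[dict]:
    words = question.lower().split()
    return [r for r in rows if any(w in r["description"].lower() for w in words)][:limit]

def build_prompt(question: str, rows: list[dict]) -> str:
    # The real solution sends this prompt to an LLM; here we just return it.
    context = "\n".join(f"{r['name']}: {r['description']}" for r in rows)
    return f"Answer using only these products:\n{context}\n\nQuestion: {question}"

products = [{"name": "Hiking pants", "description": "Lightweight pants for climbing and hiking"}]
print(build_prompt("climbing gear", hybrid_search("climbing gear", products)))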
To get you started with this flavor of RAG, we’ve created a new RAG-on-PostgreSQL solution that includes a FastAPI backend, React frontend, and infrastructure-as-code for deploying it all to Azure Container Apps with Azure PostgreSQL Flexible Server. Here it is with the sample seed data:
We use the user’s question to query a single PostgreSQL table and send the matching rows to the LLM. We display the answer plus information about any of the referenced products from the answer. Now let’s break down how that solution works.
Data preparation
When we eventually query the database table with the user’s query, we ideally want to perform a hybrid search: both a full text search and a vector search of any columns that might match the user’s intent. In order to perform a vector search, we also need a column that stores a vector embedding of the target columns.
This is what the sample table looks like, described using SQLAlchemy 2.0 model classes. The final embedding column is a Vector type, from the pgvector extension for PostgreSQL:
class Item(Base):
    __tablename__ = "items"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    type: Mapped[str] = mapped_column()
    brand: Mapped[str] = mapped_column()
    name: Mapped[str] = mapped_column()
    description: Mapped[str] = mapped_column()
    price: Mapped[float] = mapped_column()
    embedding: Mapped[Vector] = mapped_column(Vector(1536))
The embedding column has 1536 dimensions to match OpenAI’s text-embedding-ada-002 model, but you could configure it to match the dimensions of different embedding models instead. The most important thing is to know exactly which model you used for generating embeddings, so then we can later search with that same model.
To compute the value of the embedding column, we concatenate the text columns from the table row, send them to the OpenAI embedding model, and store the result:
items = session.scalars(select(Item)).all()
for item in items:
    item_for_embedding = f"Name: {item.name} Description: {item.description} Type: {item.type}"
    item.embedding = openai_client.embeddings.create(
        model=EMBED_DEPLOYMENT,
        input=item_for_embedding,
    ).data[0].embedding
session.commit()
We only need to run that once, if our data is static. However, if any of the included columns change, we should re-run that for the changed rows. Another approach is to use the Azure AI extension for Azure PostgreSQL Flexible Server. I didn’t use it in my solution since I also wanted it to run with a local PostgreSQL server, but it should work great if you’re always using the Azure-hosted PostgreSQL Flexible Server.
Hybrid search in PostgreSQL
Now our database table has both text columns and a vector column, so we should be able to perform a hybrid search: using the pgvector distance operator on the embedding column, using the built-in full-text search functions on the text columns, and merging them using the Reciprocal-Rank Fusion algorithm.
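Reciprocal Rank Fusion scores each row as 1/(k + rank) in each result list and sums the scores across lists; k = 60 is the conventional constant and matches the :k parameter below. A quick self-contained Python illustration:

# Reciprocal Rank Fusion: merge two ranked lists of ids into one score per id.
def rrf(vector_ids: list[int], fulltext_ids: list[int], k: int = 60) -> dict[int, float]:
    scores: dict[int, float] = {}
    for ids in (vector_ids, fulltext_ids):
        for rank, doc_id in enumerate(ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return scores

# id 7 ranks high in both lists, so it ends up with the top combined score.
print(rrf([7, 3, 9], [2, 7, 3]))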
We use this SQL query for hybrid search, inspired by an example from the pgvector-python repository:
vector_query = """
    SELECT id, RANK () OVER (ORDER BY embedding <=> :embedding) AS rank
    FROM items
    ORDER BY embedding <=> :embedding
    LIMIT 20
"""

fulltext_query = """
    SELECT id, RANK () OVER (ORDER BY ts_rank_cd(to_tsvector('english', description), query) DESC)
    FROM items, plainto_tsquery('english', :query) query
    WHERE to_tsvector('english', description) @@ query
    ORDER BY ts_rank_cd(to_tsvector('english', description), query) DESC
    LIMIT 20
"""

hybrid_query = f"""
    WITH vector_search AS (
        {vector_query}
    ),
    fulltext_search AS (
        {fulltext_query}
    )
    SELECT
        COALESCE(vector_search.id, fulltext_search.id) AS id,
        COALESCE(1.0 / (:k + vector_search.rank), 0.0) +
        COALESCE(1.0 / (:k + fulltext_search.rank), 0.0) AS score
    FROM vector_search
    FULL OUTER JOIN fulltext_search ON vector_search.id = fulltext_search.id
    ORDER BY score DESC
    LIMIT 20
"""

results = session.execute(
    text(hybrid_query),  # text() is sqlalchemy.text
    {"embedding": to_db(query_vector), "query": query_text, "k": 60},
).fetchall()
That hybrid search is missing the final step that we always recommend for Azure AI Search: semantic ranker, a re-ranking model that sorts the results according to the original user query. It should be possible to add a re-ranking model, as shown in another pgvector-python example, but such an addition requires load testing and possibly an architectural change, since re-ranking models are CPU-intensive. Ideally, the re-ranking model would be deployed on dedicated infrastructure optimized for running models, not on the same server as our app backend.
We get fairly good results from that hybrid search query, however! It easily finds rows that match both the exact keywords in a query and semantically similar phrases.
Function calling for SQL filtering
The next step is to handle user queries like, “climbing gear cheaper than $100.” Our hybrid search query can definitely find “climbing gear”, but it’s not designed to find products whose price is lower than some amount. The hybrid search isn’t querying the price column at all, and isn’t appropriate for a numeric comparison query anyway. Ideally, we would do both a hybrid search and add a filter clause, like WHERE price < 100.
Fortunately, we can use an LLM to suggest filter clauses based on user queries, and the OpenAI GPT models are very good at it. We add a query-rewriting phase to our RAG flow which uses OpenAI function calling to come up with the optimal search query and column filters.
In order to use OpenAI function calling, we need to describe the function and its parameters. Here’s what that looks like for a search query and single column’s filter clause:
{
    "type": "function",
    "function": {
        "name": "search_database",
        "description": "Search PostgreSQL database for relevant products based on user query",
        "parameters": {
            "type": "object",
            "properties": {
                "search_query": {
                    "type": "string",
                    "description": "Query string to use for full text search, e.g. 'red shoes'"
                },
                "price_filter": {
                    "type": "object",
                    "description": "Filter search results based on price of the product",
                    "properties": {
                        "comparison_operator": {
                            "type": "string",
                            "description": "Operator to compare the column value, either '>', '<', '>=', '<=', '='"
                        },
                        "value": {
                            "type": "number",
                            "description": "Value to compare against, e.g. 30"
                        }
                    }
                }
            }
        }
    }
}
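To show how this definition is used, here is a hedged sketch with the openai Python package; search_tool stands for the definition above, and the model name is a placeholder:

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

search_tool = {
    # ... the "search_database" function definition shown above ...
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "climbing gear cheaper than $100"}],
    tools=[search_tool],
    tool_choice={"type": "function", "function": {"name": "search_database"}},
)
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
# e.g. {"search_query": "climbing gear", "price_filter": {"comparison_operator": "<", "value": 100}}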
We can easily add additional parameters for other column filters, or we could even have a generic column filter parameter and have OpenAI suggest the column based on the table schema. For my solution, I am intentionally constraining the LLM to only suggest a subset of possible filters, to minimize risk of SQL injection or poor SQL performance. There are many libraries out there that do full text-to-SQL, and that’s another approach you could try out, if you’re comfortable with the security of those approaches.
When we get back the results from the function call, we use it to build a filter clause, and append that to our original hybrid search query. We want to do the filtering before the vector and full text search, to narrow down the search space to only what could possibly match. Here’s what the new vector search looks like, with the additional filter clause:
vector_query = f"""
    SELECT id, RANK () OVER (ORDER BY embedding <=> :embedding) AS rank
    FROM items
    {filter_clause}
    ORDER BY embedding <=> :embedding
    LIMIT 20
"""
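As an illustration of the cautious filter building described above, here is a sketch that whitelists the comparison operator and binds the value as a parameter rather than splicing it into the SQL (assuming price is the only filterable column):

ALLOWED_OPERATORS = {">", "<", ">=", "<=", "="}

def build_filter_clause(price_filter: dict | None) -> tuple[str, dict]:
    # Returns a WHERE clause plus bound parameters, or an empty clause.
    if not price_filter:
        return "", {}
    op = price_filter["comparison_operator"]
    if op not in ALLOWED_OPERATORS:
        raise ValueError(f"Operator not allowed: {op}")
    return f"WHERE price {op} :price", {"price": float(price_filter["value"])}

filter_clause, params = build_filter_clause({"comparison_operator": "<", "value": 100})
# filter_clause == "WHERE price < :price", params == {"price": 100.0}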
With the query rewriting and filter building in place, our RAG app can now answer questions that depend on filters.
RAG on unstructured vs structured data
Trying to decide what RAG approach to use, or which of our solutions to use for a prototype? If your target data is largely unstructured documents, then you should try out our Azure AI Search RAG starter solution, which will take care of the complex data ingestion phase for you. However, if your target data is an existing database table and you want to RAG over a single table (or a small number of tables), then try out the PostgreSQL RAG starter solution and modify it to work with your table schema. If your target data is a database with a multitude of tables with different schemas, then you probably want to research full text-to-SQL solutions. Also check out the llamaindex and langchain libraries, as they often have functionality and samples for common RAG scenarios.
Boosting Code Security with GHAS Code Scanning in Azure DevOps & GitHub
Code scanning, a pipeline-based tool available in GitHub Advanced Security, is designed to detect code vulnerabilities and bugs within the source code of ADO (Azure DevOps) repositories. Utilizing CodeQL as a static analysis tool, it performs query analysis and variant analysis. When vulnerabilities are found, it generates security alerts.
CodeQL
CodeQL is a powerful static analysis tool used to surface vulnerabilities and bugs in source code. It enables developers to write custom queries that analyze codebases, searching for specific patterns and potential security issues. By converting code into a database format, CodeQL allows for sophisticated, database-like queries to detect flaws.
CodeQL in Action
1. Preparing the Code
Create a CodeQL Database: Extract and structure the code into a database for analysis.
2. Running CodeQL Queries
Execute Queries: Run predefined or custom queries against the database to find potential issues.
3. Interpreting the Query Results
Review Findings: Analyze the results to find, prioritize, and address vulnerabilities and code quality issues.
Reference: About the CodeQL CLI – GitHub Docs
Sample Code Scanning Azure DevOps Pipeline
Once GitHub Advanced Security is configured for the ADO repo, we can create and run a dedicated code scanning pipeline to detect vulnerabilities and generate query results and alerts.
Below is a generic sample code scanning pipeline.
Prerequisites:
GitHub Token (githubtoken): required pipeline variable for authenticated operations with GitHub.
CodeQL Results File Path (codeql_results_file): predefined in the pipeline YAML variables to specify where the analysis results are stored.
SARIF SAST Scans Tab extension: install it from the Azure DevOps Marketplace to view query results.
# Author: Debjyoti
# This pipeline uses default CodeQL queries for code scanning
trigger: none

pool:
  vmImage: 'windows-latest'

variables:
  codeql_results_file: '$(Build.ArtifactStagingDirectory)/results.sarif'

steps:
  - task: AdvancedSecurity-Codeql-Init@1
    displayName: 'Initialize CodeQL'
    inputs:
      languages: 'python'
      loglevel: '2'
    env:
      GITHUB_TOKEN: $(githubtoken)

  - task: AdvancedSecurity-Codeql-Autobuild@1
    displayName: 'AutoBuild'

  - task: AdvancedSecurity-Codeql-Analyze@1
    displayName: 'Perform CodeQL Analysis'
    inputs:
      outputFile: '$(codeql_results_file)'

  - task: PublishBuildArtifacts@1
    displayName: 'Publish CodeQL Results'
    inputs:
      pathToPublish: '$(codeql_results_file)'
      artifactName: 'CodeQLResults'
For further insights and detailed guides, please refer to the following articles:
Default setup of Code Scanning in GitHub Repository
Requirements for Using Default Setup
GitHub Actions: Must be enabled.
Recommendations
Enable default setup if there is any chance of including at least one CodeQL-supported language in the future.
Default setup will not run or use GitHub Actions minutes if no CodeQL-supported languages are present.
If CodeQL-supported languages are added, default setup will automatically begin scanning and using minutes.
Customizing Default Setup
Start with default setup.
Evaluate code scanning performance.
Customize if needed to better meet security needs.
Configuring Default Setup for a Repository
Automatic Analysis: All CodeQL-supported languages will be analyzed.
Successful Analysis: Languages analyzed successfully will be retained.
Unsuccessful Analysis: Languages not analyzed successfully will be deselected.
Failure Handling: If all analyses fail, default setup stays enabled but inactive until a supported language is added, or setup is manually reconfigured.
Steps to Enable Default Setup
Navigate to Repository: Go to the main page of the repository.
Access Settings:
Click on “Settings” under the repository name.
If “Settings” is not visible, select the dropdown menu and click “Settings”.
Security Settings:
In the “Security” section of the sidebar, click “Code security and analysis”.
Setup Code Scanning: In the “Code scanning” section, select “Set up” and click “Default”.
Review Configuration:
A dialog will summarize the automatically created code scanning configuration.
Optionally, select a query suite in the “Query suites” section.
Extended query suite runs additional, lower severity and precision queries.
Enable CodeQL: Review settings and click “Enable CodeQL” to trigger a workflow.
View Configuration: After enablement, view the configuration by selecting the relevant choice.
CodeQL Analysis Run: Once CodeQL is set up, it will run on the repository to check for vulnerabilities in the supported language code. You can view more information by clicking on the “View last scan” option.
View Security Alerts: It will run its default built-in queries on the repository code for the supported language (in this case, Python) and will generate alerts for any detected vulnerabilities.
Reference links for more insights:
https://docs.github.com/en/code-security/code-scanning/enabling-code-scanning/configuring-default-setup-for-code-scanning
https://docs.github.com/en/code-security/code-scanning/managing-your-code-scanning-configuration/python-built-in-queries
Benefits of Running CodeQL for Developers
Responsibilities and Burdens
Initial Setup and Learning Curve: Requires time to set up and learn how to use effectively.
Maintenance of Queries: Custom queries may need updates as the codebase evolves.
False Positives: May generate false positives that need to be reviewed and addressed.
Integration Effort: Integrating CodeQL into existing CI/CD pipelines can require significant effort.
Blazor: reading parameters from the GET query string
I have a Blazor page
<h1>Hello, world!</h1>
<h2>The time on the server is @DateTime.Now LAT:@Request.Query["lat"] LONG:@Request.Query["long"]</h2>
where I am trying to read parameters but the error I get is
I also tried
<h2>The time on the server is @DateTime.Now LAT:@Context.Request.Query["lat"] LONG:@Context.Request.Query["long"]</h2>
and
<h2>The time on the server is @DateTime.Now LAT:@HttpContext.Current.Request.Query["lat"]
LONG:@HttpContext.Current.Request.Query["long"]</h2>
but the IDE doesn't understand Context or HttpContext.Current.
Planner Premium calculation is strange
Hi,
I encountered a strange behavior of the <% complete> field in the premium Planner when you change the <duration> and/or <effort> fields after you have started a task, e.g.:
Set <start date> = today, <duration> = 20 days, <effort> = 40 hours, <% complete> = 50%. Then change <duration> and <effort> to accommodate an unexpected delay/scope extension.
Doing this, you will end up with a completely broken <% complete> value as shown in the image below.
It seems that the <duration> field is somehow integrated in the calculation of <% complete>, which I feel is odd.
Happy to get some feedback
Lasse
Does Microsoft discriminate against health care providers?
At Health IT we specialise in looking after doctors in private practice. If you are in Queensland we probably look after your local GP and your local specialist. Almost all of our customers are the very definition of a small business.
For many years we’ve been driving technology forward for these customers on the Microsoft platform. Until now we’ve been official partners, which means we know what we are doing and have some access to Microsoft to help solve our customers’ problems.
To be eligible to be a current Microsoft partner we have to have some certifications and prove some growth. Our growth is well and truly above their requirements EXCEPT they only count customers with between 11 and 300 seats. Your local doctor has an average seat count of 7.5, and we look after more than 300 of these customers.
An MSP half our size but without our specialty would easily qualify to be a Microsoft partner. But because we focus on and work almost exclusively for doctors, we can’t be. Although we’re growing much faster than they require, they don’t count our growth.
I have taken this up with Partner support and obviously they can’t change the rules, as unfair as they may be. How can we get some common sense applied to this problem, or are Microsoft happy to discriminate against the most important industry in the country?
Changed mailbox security policy; alternate email address cannot receive the verification code
I have registered five Outlook mailboxes, each of which already has an alternate email address configured. Recently, I wanted to modify and add a new alternate email address, but when I enter the original alternate address to receive the verification code, the message “We were unable to send the verification code. Please try again.” appears. I have tested this many times and the message always appears, so I cannot get into the security settings. What is the cause of this, and how can I resolve the problem?
PS: I have confirmed that the original alternate email address was entered correctly.
How to Convert OST to PST Free?
Download the Advik OST to PST Converter software for Windows. This tool converts an OST file to PST with the same folder structure, so no data loss takes place. It exports all emails, calendars, contacts, notes, and other data from .ost into .pst format.
You can download the software and try it for free.
Steps to Convert OST to PST
1. Launch Advik OST to PST Converter on your PC.
2. Click Select Files and add the OST file in the software.
3. Select mailbox folders and click Next.
4. Choose PST as the saving option.
5. Click the Convert button.
Done! The software will start converting OST file in PST format automatically.
Azure Chaos Studio supports new fault for Azure Event Hub
Azure Chaos Studio supports new fault for Azure Event Hubs.
Azure Chaos Studio is a managed service that uses chaos engineering to help you measure, understand, and improve your cloud application and service resilience. Chaos engineering is a methodology by which you inject real-world faults into your application to run controlled fault injection experiments.
Azure Chaos Studio has added a new fault action for Azure Event Hubs called Change Event Hub State.
This fault action lets users disable entities within a targeted Azure Event Hubs namespace either partially or fully to test messaging infrastructure for maintenance or failure scenarios for an application dependent on an Event Hub.
The fault can be used in the Azure portal by designing experiments, deploying templates, or using the REST API. The fault library contains more information and examples.
This article covers how to set up the Change Event Hub State fault action in Azure Chaos Studio for Azure Event Hubs.
Create Event Hubs namespace
Step 1: Go to the Azure portal at https://portal.azure.com/ and log in with your user ID and password.
Step 2: Click on Create a resource and then select Event Hubs.
Step 3: Click on Create event hubs namespace.
Step 4: Click on Review + Create.
Step 5: Click on Create.
Step 6: Click on Go to resource.
Create Event Hub
Step 1: Now create the event hub.
Step 2: Click on Event Hub.
Step 3: Provide a suitable name for the event hub, then click on Review + create.
Step 4: Click on Create.
The Event Hub is created.
Chaos Studio
Step 1: Now set up Chaos Studio.
Step 2: Click on Targets.
Step 3: You will be able to view the Event Hubs namespace created earlier.
Step 4: Select the Event Hubs namespace you created and click on “Enable targets”.
Step 5: Click on Review + Enable.
Step 6: Click on Enable.
Step 7: Click on Go to Resource
Step 8: Go to Chaos Studio by searching for Chaos Studio in the search bar.
Step 9: Click on Create.
Step 10: Provide a suitable name for the experiment. Click on Experiment Designer.
Step 11: Add the Action.
Step 12: First, add the fault to disable the Azure Event Hub.
Step 13: In the Faults dropdown, select Change Event Hub State.
Change the event hub state to “Disable”.
Step 14: Click on Target Resources.
Step 15: On Target Resources, select the radio button “Manually select from a list”. Select your Event Hubs namespace and click on Add.
Step 16: Click on Add Delay.
Then change the Duration to the desired delay. In this case, I have added a 1-minute delay. Click on Add.
This means that when this experiment runs, it will first disable the Event Hub for a duration of 1 minute.
In the next step, we will change the Event Hub state back to Active.
Step 17: Now add the fault again and select Change Event Hub State, as in Steps 12 and 13.
Step 18: Now set the desiredState to Active.
Step 19: Click on Target Resources, select the Event Hubs namespace as in the previous step, and click on Add.
Step 20: Click on Review and Create.
Step 21: Click on Create.
Step 22: Click on Go to resource.
Step 23: Now click on Identity.
Step 24: Click on Add role assignments. Change the role to Azure Event Hubs Data Owner and save it.
Step 25: Click on Overview. The status will change to Running after approximately a minute.
Step 26: Once the experiment is running, go to your event hub. You will notice that its state is Disabled.
Step 27: As we added a delay of 1 minute in our experiment setup earlier, the event hub state changes back to Active after a minute.
Listing select data from one tab in another
I am trying to find a formula to list select data from one tab in another sheet of the same document.
For example, say Tab 1 had the following information listed in Column A:
Apples A
Oranges A
Oranges B
Apples B
I want Tab 2 to automatically find all instances where Apples appear in the list and then list them like this:
Apples A
Apples B
At the moment, all I can do is a formula to only show where Apples are listed, but the formula leaves lots of blank spaces in between the listings, so it looks like this:
Apples A Apples B
Is there any way I can just get specific data from one tab to be listed in another tab with no gaps in rows?
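As a quick sketch of the behavior being asked for (in Python rather than an Excel formula, purely to make the “no gaps” requirement concrete):

# Keep only the "Apples" entries, with no blank rows in the output.
data = ["Apples A", "Oranges A", "Oranges B", "Apples B"]
apples = [row for row in data if row.startswith("Apples")]
print(apples)  # ['Apples A', 'Apples B'], consecutive with no gaps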
Why is XLOOKUP not returning the value on one sheet but does on another?
Attached is the spreadsheet I am working on. Very simple request: base_data contains a product SKU, a fineline code, and a few other columns. The worksheet product-heliumSKU uses a LET statement to take data from base_data and, using the fineline code, look it up in another worksheet (fineline-heliumSKU) to obtain the helium SKU. But alas, it seems not to find the helium SKU, yet the same XLOOKUP does return the helium SKU (see fineline-heliumSKU). There has to be something wrong with the XLOOKUP in the LET statement; I'm just stumped.
Can i dynamically filter a list on a page to match content to the page title
Hi Folks,
I have a SharePoint list (Risk_links) with groupings that provide links to areas of risk.
I have an index page
I also have a page for EACH risk category that provides guidance on that topic.
On each topic page I have hosted the ‘list web part’ that links to RIsk_links.
~~
What I am hoping to achieve is to have each topic page filter the Risk_links list to only show the items with the same category as the page title.
E.G.
Topic page = Asbestos
filter the Risk_links list to only show ‘Asbestos’ rows.
~
I could create a list in each page and manually upload the relevant content; what I’d like, though, is one master list that is dynamically filtered, so I can keep one thing current rather than having to keep many things current.
This would also help me create a page template so that new topics could be added fairly seamlessly.
~
Is there a setting (or some JSON) that can help the list in each topic page to dynamically filter the list as desired?
Thanks
Signature choice for messages does not appear in Outlook setup for editing, deleting, etc.
When I am composing a message, I have three signature options for a message. One is no longer needed, but it is not listed in the Outlook/options/signatures editing area. Please advise – where is this unneeded signature stored?
Microsoft Cloud and Hosting Partner Online Meeting e154 | Highlights from Build, News, Webinars ..
Join us in 20 minutes for Episode 154 in the Microsoft Cloud and Hosting Partner Online Meeting series. This month we summarize recent news and announcements around CSP, Incentives and Partner Center; look at highlights from Build; summarize some of the recent industry news and webinars; and consider some of the forthcoming Sales, Pre-sales and Technical Training.
Content for the Online Meeting is pre-published here.
See you online soon.
Regards, Phil