Category: News
Retrieve deleted videos from Stream (Classic)
Hello,
Our team had a few meeting recordings which were saved on Microsoft Stream (Classic). Recently, when we tried to access those videos, they were no longer accessible. We realize that Stream (Classic) was retired on 15 April. Is there any way to retrieve the videos? Please advise.
Regards,
Manoj
Office Script to add image
Hi! I am using the following code to add an image into an Excel sheet automatically. However, it only works for JPG files and not for JPEG and PNG files. How can I modify the following code so that it works for all three image types? Thank you very much.
function main(workbook: ExcelScript.Workbook, base64ImageString: string, imageName: string) {
    // Get the table row count to find the next free row
    let sheet1 = workbook.getWorksheet('In progress');
    let table1 = workbook.getTable('Inprogress');
    let rowCount = table1.getRowCount();
    let n1: number = rowCount + 1;
    // Cell addresses for the image anchor and the image name
    let imageAddress: string = 'J' + n1.toString();
    let nameAddress: string = 'N' + n1.toString();
    let range1: string = n1 + ':' + n1;
    // Make the row tall enough for the image
    sheet1.getRange(range1).getFormat().setRowHeight(230);
    let range = sheet1.getRange(imageAddress);
    let image = sheet1.addImage(base64ImageString);
    image.setName(imageName);
    image.setTop(range.getTop());
    image.setLeft(range.getLeft());
    image.setWidth(300);
    image.setHeight(225);
    sheet1.getRange(nameAddress).setValue(imageName);
}
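One common culprit when only some image types work is the base64 payload itself: the image string handed to `addImage` must be raw base64, and upstream flows often supply a data URI (e.g. `data:image/png;base64,...`) for PNG/JPEG. The helper below is a hedged sketch of normalizing the input before calling `addImage`; the function name is mine, and the data-URI assumption may not match your flow.

```typescript
// Strip a "data:image/...;base64," prefix, if present, so addImage
// receives the raw base64 payload regardless of source image type.
function stripDataUriPrefix(base64ImageString: string): string {
  const commaIndex = base64ImageString.indexOf(",");
  return base64ImageString.startsWith("data:") && commaIndex !== -1
    ? base64ImageString.slice(commaIndex + 1)
    : base64ImageString;
}
```

Inside `main`, you would then call `sheet1.addImage(stripDataUriPrefix(base64ImageString))` instead of passing the string directly.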
RAG on structured data with PostgreSQL
RAG (Retrieval Augmented Generation) is one of the most promising uses for large language models. Instead of asking an LLM a question and hoping the answer lies somewhere in its weights, we instead first query a knowledge base for anything relevant to the question, and then feed both those results and the original question to the LLM.
We have many RAG solutions out there for asking questions on unstructured documents, like PDFs and Word Documents. Our most popular Azure solution for this scenario includes a data ingestion process to extract the text from the documents, chunk them up into appropriate sizes, and store them in an Azure AI Search index. When your RAG is on unstructured documents, you’ll always need a data ingestion step to store them in an LLM-compatible format.
But what if you just want users to ask questions about structured data, like a table in a database? Imagine customers that want to ask questions about the products in a store’s inventory, and each product is a row in the table. We can use the RAG approach there, too, and in some ways, it’s a simpler process.
To get you started with this flavor of RAG, we’ve created a new RAG-on-PostgreSQL solution that includes a FastAPI backend, React frontend, and infrastructure-as-code for deploying it all to Azure Container Apps with Azure PostgreSQL Flexible Server. Here it is with the sample seed data:
We use the user’s question to query a single PostgreSQL table and send the matching rows to the LLM. We display the answer plus information about any of the referenced products from the answer. Now let’s break down how that solution works.
Data preparation
When we eventually query the database table with the user’s query, we ideally want to perform a hybrid search: both a full text search and a vector search of any columns that might match the user’s intent. In order to perform a vector search, we also need a column that stores a vector embedding of the target columns.
This is what the sample table looks like, described using SQLAlchemy 2.0 model classes. The final embedding column is a Vector type, from the pgvector extension for PostgreSQL:
class Item(Base):
    __tablename__ = "items"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    type: Mapped[str] = mapped_column()
    brand: Mapped[str] = mapped_column()
    name: Mapped[str] = mapped_column()
    description: Mapped[str] = mapped_column()
    price: Mapped[float] = mapped_column()
    embedding: Mapped[Vector] = mapped_column(Vector(1536))
The embedding column has 1536 dimensions to match OpenAI’s text-embedding-ada-002 model, but you could configure it to match the dimensions of different embedding models instead. The most important thing is to know exactly which model you used for generating embeddings, so then we can later search with that same model.
To compute the value of the embedding column, we concatenate the text columns from the table row, send them to the OpenAI embedding model, and store the result:
items = session.scalars(select(Item)).all()
for item in items:
    item_for_embedding = f"Name: {item.name} Description: {item.description} Type: {item.type}"
    item.embedding = openai_client.embeddings.create(
        model=EMBED_DEPLOYMENT,
        input=item_for_embedding,
    ).data[0].embedding
session.commit()
We only need to run that once, if our data is static. However, if any of the included columns change, we should re-run that for the changed rows. Another approach is to use the Azure AI extension for Azure PostgreSQL Flexible Server. I didn’t use it in my solution since I also wanted it to run with a local PostgreSQL server, but it should work great if you’re always using the Azure-hosted PostgreSQL Flexible Server.
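One lightweight way to detect which rows need re-embedding is to store a hash of the concatenated text alongside the embedding and compare it on each run. This is a sketch of that idea; the hash column and helper names are my own addition, not part of the solution.

```python
import hashlib


def embedding_source_text(name, description, type_):
    """Concatenate the text columns exactly as they were embedded."""
    return f"Name: {name} Description: {description} Type: {type_}"


def needs_reembedding(source_text, stored_hash):
    """True if the row's text changed since its embedding was computed."""
    return hashlib.sha256(source_text.encode("utf-8")).hexdigest() != stored_hash
```

At ingestion time you would store the hash next to the embedding; on later runs, only rows where `needs_reembedding` returns True get sent back to the embedding model.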
Hybrid search in PostgreSQL
Now our database table has both text columns and a vector column, so we should be able to perform a hybrid search: using the pgvector distance operator on the embedding column, using the built-in full-text search functions on the text columns, and merging them using the Reciprocal-Rank Fusion algorithm.
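The Reciprocal-Rank Fusion step is simple enough to sketch outside SQL; here is a minimal Python version of the merging logic (function and argument names are illustrative, not from the solution):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of ids into one ordering.

    rankings: list of lists, each ordered best-first.
    Each id scores 1 / (k + rank) per list it appears in; scores add up,
    so ids ranked well by multiple searches float to the top.
    """
    scores = {}
    for ranked_ids in rankings:
        for rank, item_id in enumerate(ranked_ids, start=1):
            scores[item_id] = scores.get(item_id, 0.0) + 1.0 / (k + rank)
    # Best-scoring ids first
    return sorted(scores, key=scores.get, reverse=True)
```

The SQL below does exactly this fusion, but inside the database, so only the top fused rows ever leave PostgreSQL.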
We use this SQL query for hybrid search, inspired by an example from the pgvector-python repository:
vector_query = """
    SELECT id, RANK() OVER (ORDER BY embedding <=> :embedding) AS rank
    FROM items
    ORDER BY embedding <=> :embedding
    LIMIT 20
"""

fulltext_query = """
    SELECT id, RANK() OVER (ORDER BY ts_rank_cd(to_tsvector('english', description), query) DESC) AS rank
    FROM items, plainto_tsquery('english', :query) query
    WHERE to_tsvector('english', description) @@ query
    ORDER BY ts_rank_cd(to_tsvector('english', description), query) DESC
    LIMIT 20
"""

hybrid_query = f"""
WITH vector_search AS (
    {vector_query}
),
fulltext_search AS (
    {fulltext_query}
)
SELECT
    COALESCE(vector_search.id, fulltext_search.id) AS id,
    COALESCE(1.0 / (:k + vector_search.rank), 0.0) +
    COALESCE(1.0 / (:k + fulltext_search.rank), 0.0) AS score
FROM vector_search
FULL OUTER JOIN fulltext_search ON vector_search.id = fulltext_search.id
ORDER BY score DESC
LIMIT 20
"""

results = session.execute(
    text(hybrid_query),  # text() from sqlalchemy
    {"embedding": to_db(query_vector), "query": query_text, "k": 60},
).fetchall()
That hybrid search is missing the final step that we always recommend for Azure AI Search: semantic ranker, a re-ranking model that sorts the results according to the original user query. It should be possible to add a re-ranking model, as shown in another pgvector-python example, but such an addition requires load testing and possibly an architectural change, since re-ranking models are CPU-intensive. Ideally, the re-ranking model would be deployed on dedicated infrastructure optimized for model inference, not on the same server as our app backend.
We get fairly good results from that hybrid search query, however! It easily finds rows that both match the exact keywords in a query and semantically similar phrases, as demonstrated by these user questions:
Function calling for SQL filtering
The next step is to handle user queries like, “climbing gear cheaper than $100.” Our hybrid search query can definitely find “climbing gear”, but it’s not designed to find products whose price is lower than some amount. The hybrid search isn’t querying the price column at all, and isn’t appropriate for a numeric comparison query anyway. Ideally, we would do both a hybrid search and add a filter clause, like WHERE price < 100.
Fortunately, we can use an LLM to suggest filter clauses based on user queries, and the OpenAI GPT models are very good at it. We add a query-rewriting phase to our RAG flow which uses OpenAI function calling to come up with the optimal search query and column filters.
In order to use OpenAI function calling, we need to describe the function and its parameters. Here’s what that looks like for a search query and single column’s filter clause:
{
    "type": "function",
    "function": {
        "name": "search_database",
        "description": "Search PostgreSQL database for relevant products based on user query",
        "parameters": {
            "type": "object",
            "properties": {
                "search_query": {
                    "type": "string",
                    "description": "Query string to use for full text search, e.g. 'red shoes'"
                },
                "price_filter": {
                    "type": "object",
                    "description": "Filter search results based on price of the product",
                    "properties": {
                        "comparison_operator": {
                            "type": "string",
                            "description": "Operator to compare the column value, either '>', '<', '>=', '<=', '='"
                        },
                        "value": {
                            "type": "number",
                            "description": "Value to compare against, e.g. 30"
                        }
                    }
                }
            }
        }
    }
}
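On the response side, the model returns its chosen arguments as a JSON string on the tool call, which we parse before building any SQL. A small sketch of that parsing step (the sample arguments below are made up for illustration):

```python
import json


def parse_search_arguments(arguments_json):
    """Extract the search query and optional price filter from a tool-call arguments string."""
    args = json.loads(arguments_json)
    return args.get("search_query"), args.get("price_filter")


# Example arguments, as they might come back from the function call:
query, price_filter = parse_search_arguments(
    '{"search_query": "climbing gear", '
    '"price_filter": {"comparison_operator": "<", "value": 100}}'
)
```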
We can easily add additional parameters for other column filters, or we could even have a generic column filter parameter and have OpenAI suggest the column based on the table schema. For my solution, I am intentionally constraining the LLM to only suggest a subset of possible filters, to minimize risk of SQL injection or poor SQL performance. There are many libraries out there that do full text-to-SQL, and that’s another approach you could try out, if you’re comfortable with the security of those approaches.
When we get back the results from the function call, we use it to build a filter clause, and append that to our original hybrid search query. We want to do the filtering before the vector and full text search, to narrow down the search space to only what could possibly match. Here’s what the new vector search looks like, with the additional filter clause:
vector_query = f"""
    SELECT id, RANK() OVER (ORDER BY embedding <=> :embedding) AS rank
    FROM items
    {filter_clause}
    ORDER BY embedding <=> :embedding
    LIMIT 20
"""
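Building `filter_clause` from the function-call output is where the SQL-injection caution above matters: whitelist the operator and bind the value as a parameter rather than formatting it into the SQL string. A minimal sketch (helper names are mine, not the solution's):

```python
ALLOWED_OPERATORS = {">", "<", ">=", "<=", "="}


def build_price_filter(price_filter):
    """Return (filter_clause, params) for the hybrid search query."""
    if price_filter is None:
        return "", {}
    op = price_filter["comparison_operator"]
    if op not in ALLOWED_OPERATORS:
        raise ValueError(f"Unsupported operator: {op}")
    # The value goes in as a bound parameter, never string-formatted into SQL.
    return f"WHERE price {op} :price_value", {"price_value": float(price_filter["value"])}
```

The returned params dict is merged into the existing query parameters alongside `embedding`, `query`, and `k`.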
With the query rewriting and filter building in place, our RAG app can now answer questions that depend on filters:
RAG on unstructured vs structured data
Trying to decide which RAG approach to use, or which of our solutions to use for a prototype? If your target data is largely unstructured documents, then you should try out our Azure AI Search RAG starter solution, which will take care of the complex data ingestion phase for you. However, if your target data is an existing database table, and you want to RAG over a single table (or a small number of tables), then try out the PostgreSQL RAG starter solution and modify it to work with your table schema. If your target data is a database with a multitude of tables with different schemas, then you probably want to research full text-to-SQL solutions. Also check out the LlamaIndex and LangChain libraries, as they often have functionality and samples for common RAG scenarios.
Microsoft Tech Community – Latest Blogs
Boosting Code Security with GHAS Code Scanning in Azure DevOps & GitHub
Code scanning, a pipeline-based tool available in GitHub Advanced Security, is designed to detect code vulnerabilities and bugs within the source code of ADO (Azure DevOps) repositories. Utilizing CodeQL as a static analysis tool, it performs query analysis and variant analysis. When vulnerabilities are found, it generates security alerts.
CodeQL
CodeQL is a powerful static analysis tool used for finding vulnerabilities and bugs in source code. It enables developers to write custom queries that analyze codebases, searching for specific patterns and potential security issues. By converting code into a database format, CodeQL allows for sophisticated, database-like queries to detect flaws.
CodeQL in Action
1. Preparing the Code
Create a CodeQL Database: Extract and structure the code into a database for analysis.
2. Running CodeQL Queries
Execute Queries: Run predefined or custom queries against the database to find potential issues.
3. Interpreting the Query Results
Review Findings: Analyze the results to find, prioritize, and address vulnerabilities and code quality issues.
Reference: – About the CodeQL CLI – GitHub Docs
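The three steps above map onto the CodeQL CLI roughly as follows; the database name and paths here are illustrative, and this sketch assumes the CodeQL CLI is installed and on your PATH.

```shell
# 1. Prepare the code: extract the source tree into a CodeQL database
codeql database create my-codeql-db --language=python --source-root=.

# 2. Run queries: analyze the database with the default query suite
codeql database analyze my-codeql-db --format=sarif-latest --output=results.sarif

# 3. Interpret the results: open results.sarif in a SARIF viewer or upload it
```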
Sample Code Scanning Azure DevOps Pipeline
Once GitHub Advanced Security is configured for the ADO repo, we can create and run a dedicated code scanning pipeline to detect vulnerabilities and generate query results and alerts.
Below is a generic sample code scanning pipeline.
Prerequisites:
GitHub Token (githubtoken): required pipeline variable for authenticated operations with GitHub.
CodeQL Results File Path (codeql_results_file): predefined in the pipeline YAML variables to specify where the analysis results are stored.
SARIF SAST Scans Tab extension: install it from the Azure DevOps Marketplace to view query results.
# Author: Debjyoti
# This pipeline uses default CodeQL queries for code scanning
trigger: none

pool:
  vmImage: 'windows-latest'

variables:
  codeql_results_file: '$(Build.ArtifactStagingDirectory)/results.sarif'

steps:
  - task: AdvancedSecurity-Codeql-Init@1
    displayName: 'Initialize CodeQL'
    inputs:
      languages: 'python'
      loglevel: '2'
    env:
      GITHUB_TOKEN: $(githubtoken)

  - task: AdvancedSecurity-Codeql-Autobuild@1
    displayName: 'AutoBuild'

  - task: AdvancedSecurity-Codeql-Analyze@1
    displayName: 'Perform CodeQL Analysis'
    inputs:
      outputFile: '$(codeql_results_file)'

  - task: PublishBuildArtifacts@1
    displayName: 'Publish CodeQL Results'
    inputs:
      pathToPublish: '$(codeql_results_file)'
      artifactName: 'CodeQLResults'
For further insights and detailed guides, please refer to the following articles:
Default setup of Code Scanning in GitHub Repository
Requirements for Using Default Setup
GitHub Actions: Must be enabled.
Recommendations
Enable default setup if there is any chance of including at least one CodeQL-supported language in the future.
Default setup will not run or use GitHub Actions minutes if no CodeQL-supported languages are present.
If CodeQL-supported languages are added, default setup will automatically begin scanning and using minutes.
Customizing Default Setup
Start with default setup.
Evaluate code scanning performance.
Customize if needed to better meet security needs.
Configuring Default Setup for a Repository
Automatic Analysis: All CodeQL-supported languages will be analyzed.
Successful Analysis: Languages analyzed successfully will be retained.
Unsuccessful Analysis: Languages not analyzed successfully will be deselected.
Failure Handling: If all analyses fail, default setup stays enabled but inactive until a supported language is added, or setup is manually reconfigured.
Steps to Enable Default Setup
Navigate to Repository: Go to the main page of the repository.
Access Settings:
Click on “Settings” under the repository name.
If “Settings” is not visible, select the dropdown menu and click “Settings”.
Security Settings:
In the “Security” section of the sidebar, click “Code security and analysis”.
Set Up Code Scanning: In the "Code scanning" section, select "Set up" and click "Default".
Review Configuration:
A dialog will summarize the automatically created code scanning configuration.
Optionally, select a query suite in the “Query suites” section.
The extended query suite runs additional queries of lower severity and precision.
Enable CodeQL: Review settings and click “Enable CodeQL” to trigger a workflow.
View Configuration: After enablement, view the configuration by selecting the relevant choice.
CodeQL Analysis Run: Once CodeQL is set up, it will run on the repository to check for vulnerabilities in the supported language code. You can view more information by clicking on the “View last scan” option.
View Security Alerts: It will run its default built-in queries on the repository code for the supported language (in this case, Python) and will generate alerts for any detected vulnerabilities.
Reference Link for more insights –
https://docs.github.com/en/code-security/code-scanning/enabling-code-scanning/configuring-default-setup-for-code-scanning
https://docs.github.com/en/code-security/code-scanning/managing-your-code-scanning-configuration/python-built-in-queries
Benefits of Running CodeQL for Developers
Responsibilities and Burdens
Initial Setup and Learning Curve: Requires time to set up and learn how to use effectively.
Maintenance of Queries: Custom queries may need updates as the codebase evolves.
False Positives: May generate false positives that need to be reviewed and addressed.
Integration Effort: Integrating CodeQL into existing CI/CD pipelines can require significant effort.
How to implement a steer-by-wire system model in Simulink
How to implement a steer-by-wire system model in Simulink? Please help me with this.
MATLAB Answers — New Questions
Can “pipe(G)” block be used to model forced air convection?
Hello,
I am trying to model a battery pack with forced-air convection cooling. The Pipe (G) block can be used to transfer heat from gas flowing inside a pipe to the intended surface geometry, but that is internal flow. My concern is that I want to model heat transfer from the surface to the gas (air), which becomes an external flow problem. Is it possible to do so?
please help!
thanks.
Find all paths between 2 nodes under constraint
hey
I want to find all paths between A and B (starting from the bottom left).
I have tried to use meshgrid and the allpaths command without success.
The constraint is that I can only go right or up.
Also, I need to write code that calculates the probability at each node of going up or right, given that all routes are equally probable.
I would appreciate a hint on how to write my code and which commands to use.
Here is the mesh:
How can I add a mouse movement listener to a custom ui component?
Hello everyone,
I have created a MATLAB app that uses mouse-movement events. It uses the WindowButtonMotion callback of the UIFigure that was described here https://nl.mathworks.com/matlabcentral/answers/775147-how-to-create-mouse-movement-event-on-uiaxes-in-app-designer-to-catch-cursor-location-on-the-axes in the answer (Method 2) by Adam Danz.
The app works nicely. However, now that the project is evolving, I would like to convert the app to a Custom UI Component. But MATLAB components do not seem to include a UIFigure, and therefore no WindowButtonMotion event/callback. The Custom UI Component itself (comp) and UIAxes do not have a WindowButtonMotion event.
What I would like is a mouse-movement event/callback/listener that calls a function, so that I can check the mouse position against criteria and take appropriate action. This seems like very common functionality, but I cannot figure out how to accomplish it in a Custom UI Component.
Any ideas?
Blazor: reading parameters from the GET query string
I have a blazor page
<h1>Hello, world!</h1>
<h2>The time on the server is @DateTime.Now LAT:@Request.Query["lat"] LONG:@Request.Query["long"]</h2>
where I am trying to read parameters but the error I get is
I also tried
<h2>The time on the server is @DateTime.Now LAT:@Context.Request.Query["lat"] LONG:@Context.Request.Query["long"]</h2>
and
<h2>The time on the server is @DateTime.Now LAT:@HttpContext.Current.Request.Query["lat"]
LONG:@HttpContext.Current.Request.Query["long"]</h2>
but the IDE doesn't understand Context or HttpContext.Current
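In Blazor components there is no ambient Request or HttpContext; the usual pattern is to inject NavigationManager and parse the query string from its Uri. This is a sketch of that approach using QueryHelpers from Microsoft.AspNetCore.WebUtilities (the route and field names here are illustrative):

```razor
@page "/servertime"
@inject NavigationManager Nav

<h1>Hello, world!</h1>
<h2>The time on the server is @DateTime.Now LAT:@lat LONG:@longitude</h2>

@code {
    private string? lat;
    private string? longitude;

    protected override void OnInitialized()
    {
        // Parse ?lat=...&long=... from the current URL
        var uri = Nav.ToAbsoluteUri(Nav.Uri);
        var query = Microsoft.AspNetCore.WebUtilities.QueryHelpers.ParseQuery(uri.Query);
        if (query.TryGetValue("lat", out var latValue)) { lat = latValue; }
        if (query.TryGetValue("long", out var longValue)) { longitude = longValue; }
    }
}
```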
Planner Premium calculation is strange
Hi,
I encountered strange behavior of the <% complete> field in Planner Premium when you change the <duration> and/or <effort> fields after you have started a task, e.g.:
Set <start date> = today, <duration> = 20 days, <effort> = 40 hours, <% complete> = 50%. Then change <duration> and <effort> to accommodate an unexpected delay/scope extension.
Doing this, you end up with a completely broken <% complete> value, as shown in the image below.
It seems that the <duration> field is somehow involved in the calculation of <% complete>, which I find odd.
Happy to get some feedback
Lasse
Does Microsoft discriminate against health care providers?
At Health IT we specialise in looking after doctors in private practice. If you are in Queensland we probably look after your local GP and your local specialist. Almost all of our customers are the very definition of a small business.
For many years we’ve been driving technology forward for these customers on the Microsoft platform. Until now we’ve been official partners which means we know what we are doing and have some access to Microsoft to help solve our customer’s problems.
To be eligible to be a current Microsoft partner we have to have some certifications and prove some growth. Our growth is well and truly above their requirements EXCEPT, they only count customers with seats between 11-300. Your local doctor has an average seat count of 7.5, and we look after more than 300 of these customers.
An MSP half our size but without our specialty would easily qualify to be a Microsoft partner. But because we focus on and work almost exclusively for doctors, we can’t be. Although we’re growing much faster than they require, they don’t count our growth.
I have taken this up with Partner support and obviously they can’t change the rules, as unfair as they may be. How can we get some common sense applied to this problem, or are Microsoft happy to discriminate against the most important industry in the country?
Changing mailbox security settings: recovery email cannot receive the verification code
I have registered five Outlook mailboxes, each with a recovery email already set up. Recently I wanted to change/add a recovery email, but when I enter the original recovery email to receive a verification code, I get the message "We couldn't send a verification code. Please try again." I have tried many times and the same message keeps appearing, so I cannot get into the security settings. What is the cause, and how can I resolve it?
PS: I have confirmed that the original recovery email address was entered correctly.
Conversion of abc unbalanced waveforms to balanced symmetrical components waveforms
I want to convert my three-phase unbalanced sinusoidal waveforms to their balanced symmetrical-component waveforms. I am getting the phasor (real and imaginary) of the symmetrical-component waveform, but I want the sinusoidal waveform. How can I extract that?
Please help me with this.
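For reference, the underlying math is straightforward to check outside Simulink; below is a small Python sketch of the Fortescue transform and of converting a phasor back to its time-domain sinusoid (function names are mine), which translates directly to MATLAB:

```python
import cmath
import math


def sequence_components(Va, Vb, Vc):
    """Fortescue transform: return (zero, positive, negative) sequence phasors."""
    a = cmath.exp(2j * math.pi / 3)  # 120-degree rotation operator
    V0 = (Va + Vb + Vc) / 3
    V1 = (Va + a * Vb + a * a * Vc) / 3
    V2 = (Va + a * a * Vb + a * Vc) / 3
    return V0, V1, V2


def phasor_to_waveform(V, f, t):
    """Instantaneous value at time t of the sinusoid the phasor V represents."""
    return abs(V) * math.cos(2 * math.pi * f * t + cmath.phase(V))
```

To get the balanced positive-sequence waveforms, sample `phasor_to_waveform` over one period for V1; phases b and c are V1 rotated by -120° and +120° respectively.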
Thank you.
Match Block Sizes in Simulink (2021b)
Does anyone know how to add the "Match Size" function in Simulink? I have been using this function a lot since 2013b, but I was shocked not to see this feature in 2021b. The shortcut for it was Ctrl+A+S. Thanks in advance!
How can I optimize the code below, which assigns grey-scale color values to a color array from pixels in an image?
Hi!
I am trying to optimise the code below, which is the only part of a program I have made that uses for loops, and as such is fairly time consuming.
The program essentially uses two cameras, with their positions calibrated and defined relative to the position of an object they are imaging. The object is created in the virtual space by importing it as a point cloud and converting this to a triangulated surface. The code uses information about the normal vector to each triangle on the surface to figure out whether each camera can actually "see" that point on the object in the real world, and then uses the relevant pixel data for that point (each point is mapped onto the image to find out which pixel refers to it) to assign the correct colour value to the color array for the point cloud.
Any advice would be appreciated; I'm fairly new to MATLAB and definitely new to the concept of eliminating for loops and code optimisation.
If I haven't explained the code clearly enough, please let me know.
Thanks,
Toby
%Assign color to points if the surface faces towards the relevant camera
for m=1:numberoffaces
camera1vector=locationcamera1-P(m,:);
camera2vector=locationcamera2-P(m,:);
if (dot(camera1vector,F(m,:)))<0
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
elseif (dot(camera2vector,F(m,:)))<0
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
else
colors(m)= (0);
colors(m)= (0);
colors(m)= (0);
end
endHi!
I am trying to optimise the code below, which is the only part of a program i have made which uses for loops, and as such is fairly time consuming.
The program essentially uses two cameras, with their positions calibrated and defined relative to the position of an object they are imaging. The object is created in the virtual space by importing it as a point cloud and converting this to a triangulated surface. The code uses information about the normal vector to each triangle on the surface to figure out whether each camera can actually "see" that point on the object in the real world, and then uses the relevant pixel data for that point (each point is mapped onto the image to find out which pixel refers to it) to assign the correct colour value to the color array for the point cloud.
Any advice would be appreciated, I’m fairly new to matlab and definitely new to the concept of eliminating for loops and code optimisation.
If i haven’t explained the code clearly enough please let me know.
Thanks,
Toby
%Assign color to points if the surface faces towards the relevant camera
for m=1:numberoffaces
camera1vector=locationcamera1-P(m,:);
camera2vector=locationcamera2-P(m,:);
if (dot(camera1vector,F(m,:)))<0
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
elseif (dot(camera2vector,F(m,:)))<0
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
else
colors(m)= (0);
colors(m)= (0);
colors(m)= (0);
end
end Hi!
I am trying to optimise the code below, which is the only part of a program i have made which uses for loops, and as such is fairly time consuming.
The program essentially uses two cameras, with their positions calibrated and defined relative to the position of an object they are imaging. The object is created in the virtual space by importing it as a point cloud and converting this to a triangulated surface. The code uses information about the normal vector to each triangle on the surface to figure out whether each camera can actually "see" that point on the object in the real world, and then uses the relevant pixel data for that point (each point is mapped onto the image to find out which pixel refers to it) to assign the correct colour value to the color array for the point cloud.
Any advice would be appreciated, I’m fairly new to matlab and definitely new to the concept of eliminating for loops and code optimisation.
If i haven’t explained the code clearly enough please let me know.
Thanks,
Toby
%Assign color to points if the surface faces towards the relevant camera
for m=1:numberoffaces
camera1vector=locationcamera1-P(m,:);
camera2vector=locationcamera2-P(m,:);
if (dot(camera1vector,F(m,:)))<0
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
colors(m)= imageGL(pixelposition1(m,2),pixelposition1(m,1));
elseif (dot(camera2vector,F(m,:)))<0
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
colors(m)= imageGR(pixelposition2(m,2),pixelposition2(m,1));
else
colors(m)= (0);
colors(m)= (0);
colors(m)= (0);
end
end point cloud, for loop, for, performance, optimization, calibration, colormap, image processing, stereo imaging MATLAB Answers — New Questions
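The loop can be replaced by computing all of the dot products at once and assigning with logical indexing. Below is a NumPy sketch of that idea (array names mirror the MATLAB variables; the 1-based (column, row) pixel indexing of the original is converted to 0-based). In MATLAB the same pattern uses `sum((locationcamera1 - P) .* F, 2)` for the dot products and logical indexing with `sub2ind` for the pixel lookups:

```python
import numpy as np

def assign_colors(P, F, loc_cam1, loc_cam2, imageGL, imageGR, pix1, pix2):
    """Vectorized equivalent of the per-face loop.

    P, F             : (n, 3) face points and normals
    loc_cam1/loc_cam2: (3,) camera locations
    imageGL, imageGR : 2-D grey-scale images
    pix1, pix2       : (n, 2) 1-based (col, row) pixel positions, as in the MATLAB code
    """
    # All n dot products of camera-to-face vectors with face normals at once
    d1 = np.einsum('ij,ij->i', loc_cam1 - P, F)
    d2 = np.einsum('ij,ij->i', loc_cam2 - P, F)

    colors = np.zeros(P.shape[0])
    see1 = d1 < 0                 # faces visible to camera 1
    see2 = (~see1) & (d2 < 0)     # faces visible only to camera 2
    # Convert 1-based (col, row) indices to 0-based (row, col)
    colors[see1] = imageGL[pix1[see1, 1] - 1, pix1[see1, 0] - 1]
    colors[see2] = imageGR[pix2[see2, 1] - 1, pix2[see2, 0] - 1]
    return colors
```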
convert double to signed int
Hi,
Some doubt: if A = 40000 = 0x9C40 and I'd like 0x9C40 to be interpreted as a signed integer, what should be done?
Executing int16, I'm getting
int16(A) = 32767
Any suggestion so that I'll get -25,536? type conversion MATLAB Answers — New Questions
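int16() saturates out-of-range values at 32767 rather than reinterpreting the bits; in MATLAB, typecast(uint16(A), 'int16') keeps the same 16 bits and yields -25536 (i.e. 40000 - 65536). A Python sketch of the same bit reinterpretation:

```python
import struct

def as_int16(value):
    """Reinterpret the low 16 bits of a non-negative integer as a signed int16."""
    # Pack as unsigned 16-bit, unpack as signed 16-bit (two's complement)
    return struct.unpack('<h', struct.pack('<H', value & 0xFFFF))[0]

print(as_int16(0x9C40))  # -25536
```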
How to Convert OST to PST Free?
Download the Advik OST to PST Converter software for Windows. This tool converts an OST file to PST with the same folder structure, so no data loss takes place. It exports all emails, calendars, contacts, notes, and other data from .ost into .pst format.
You can download the software and try it for free.
Steps to Convert OST to PST
1. Launch Advik OST to PST Converter on your PC.
2. Click Select Files and add the OST file in the software.
3. Select mailbox folders and click Next.
4. Choose PST as the saving option.
5. Click the Convert button.
Done! The software will start converting the OST file to PST format automatically. Read More
Azure Chaos Studio supports new fault for Azure Event Hub
Azure Chaos Studio is a managed service that uses chaos engineering to help you measure, understand, and improve your cloud application and service resilience. Chaos engineering is a methodology by which you inject real-world faults into your application to run controlled fault injection experiments.
Azure Chaos Studio has added a new fault action for Azure Event Hubs called Change Event Hub State.
This fault action lets users disable entities within a targeted Azure Event Hubs namespace either partially or fully to test messaging infrastructure for maintenance or failure scenarios for an application dependent on an Event Hub.
The fault can be used in the Azure portal by designing experiments, deploying templates, or using the REST API. The fault library contains more information and examples.
This article covers how to set up the Change Event Hub State fault action for Azure Event Hubs in Azure Chaos Studio.
Create Event Hubs namespace
Step 1: Go to the Azure portal at https://portal.azure.com/ and log in with your user ID and password.
Step 2: Click on Create a resource and then select Event Hubs.
Step 3: Click on Create event hubs namespace.
Step 4: Click on Review + Create.
Step 5: Click on Create.
Step 6: Click on Go to resource.
Create Event Hub
Step 1: Now create the Event Hub.
Step 2: Click on Event Hub.
Step 3: Provide a suitable name for the event hub, then click on Review + Create.
Step 4: Click on Create.
The Event Hub is created.
Chaos Studio
Step 1: Now create the Chaos Studio experiment.
Step 2: Click on Targets.
Step 3: You will be able to view the Event Hubs namespace created earlier.
Step 4: Select the Event Hubs namespace you created and click on "Enable targets".
Step 5: Click on Review + Enable.
Step 6: Click on Enable.
Step 7: Click on Go to Resource.
Step 8: Go to Chaos Studio by searching for "Chaos Studio" in the search bar.
Step 9: Click on Create.
Step 10: Provide a suitable name for the experiment, then click on Experiment Designer.
Step 11: Add the action.
Step 12: First, add the fault to disable the Azure Event Hub.
Step 13: In the Faults dropdown, select Change Event Hub State and change the event hub state to "Disabled".
Step 14: Click on Target Resources.
Step 15: On Target Resources, select the radio button "Manually select from a list", select your Event Hubs namespace, and click on Add.
Step 16: Click on Add Delay, then change the duration to the desired delay. In this case, I have added a 1-minute delay. Click on Add.
This means that when this experiment runs, it will first disable the Event Hub for a duration of 1 minute.
In the next step, we will change the Event Hub state back to Active.
Step 17: Now add the fault again and select Change Event Hub State, as you did in Step 13.
Step 18: Now set the desiredState to Active.
Step 19: Click on Target Resources, select the Event Hubs namespace as you did in the previous step, and click on Add.
Step 20: Click on Review and Create.
Step 21: Click on Create.
Step 22: Click on Go to resource.
Step 23: Now click on Identity.
Step 24: Click on Add role assignments, change the role to Azure Event Hubs Data Owner, and save it.
Step 25: Click on Overview. The status will change to Running after approximately a minute.
Step 26: Once the status is Running, go to your Event Hub. You will notice that its state is Disabled.
Step 27: As we added a 1-minute delay in the experiment setup earlier, the event hub state changes back to Active after a minute.
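The same experiment can also be deployed from a template or through the REST API rather than the portal. The fragment below is a hypothetical sketch of what the disable action looks like inside an experiment definition; the exact fault URN (assumed here to be urn:csci:microsoft:eventHub:changeEventHubState/1.0), the parameter names such as desiredState and eventHubs, and the selector ID should all be verified against the fault library before use, and "myeventhub" is a placeholder name:

```json
{
  "type": "continuous",
  "name": "urn:csci:microsoft:eventHub:changeEventHubState/1.0",
  "duration": "PT1M",
  "parameters": [
    { "key": "desiredState", "value": "Disabled" },
    { "key": "eventHubs", "value": "[\"myeventhub\"]" }
  ],
  "selectorId": "Selector1"
}
```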
Microsoft Tech Community – Latest Blogs –Read More
Error using cell/unique (line 85) Cell array input must be a cell array of character vectors.
I have created a cell array to store [4x4] matrices; there are 10 matrices in the cell in total. I want to find the unique matrices in that cell and their occurrence counts.
How can I do that in MATLAB? unique won't work with a cell array having matrix entries. unique, matrix similarity MATLAB Answers — New Questions
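One standard trick is to flatten each matrix into a row and then use a row-wise unique with counts. In MATLAB this looks like `M = cat(3, C{:}); [u, ~, ic] = unique(reshape(M, [], size(M, 3))', 'rows'); counts = accumarray(ic, 1);`. A NumPy sketch of the same idea (function name is illustrative):

```python
import numpy as np

def unique_matrices(mats):
    """Find the unique matrices in a list and how often each occurs.

    mats: list of equal-sized 2-D arrays (e.g. ten 4x4 matrices).
    Returns (list of unique matrices, counts array).
    """
    shape = mats[0].shape
    # Flatten each matrix into one row so row-wise unique can compare whole matrices
    flat = np.stack([m.ravel() for m in mats])
    uniq_rows, counts = np.unique(flat, axis=0, return_counts=True)
    return [row.reshape(shape) for row in uniq_rows], counts
```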
Problems with connecting to the ROS master using rosinit
I am running ROS on a Raspberry Pi 4 and used the following code to try to establish a global ROS node:
setenv('ROS_MASTER_URI', 'http://192.168.1.2:11311')
setenv('ROS_IP', '192.168.1.1')
rosinit('192.168.1.2')
The error message that pops up on rosinit is:
'Connection to process with Exchange: "ce85a6ab-8a96-40e9-9f51-c8fa6ac4ded8" was lost.'
This never happened before, as I was able to establish the connection previously, and neither calling rosshutdown nor rebooting the Raspberry Pi worked. ros, rosinit MATLAB Answers — New Questions
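A first diagnostic when rosinit starts failing is to confirm that the master's TCP port is reachable at all from the MATLAB machine (the default port in the ROS_MASTER_URI above is 11311). The sketch below is a generic connectivity probe, not a ROS-specific check:

```python
import socket

def master_reachable(host, port=11311, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (e.g. the ROS master)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the host from the ROS_MASTER_URI in the question:
# master_reachable('192.168.1.2')
```

If this returns False, the problem is at the network level (firewall, IP change, master not running) rather than in the MATLAB ROS toolbox itself.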