Month: August 2024
What is the best way to convert webp to jpg on Windows?
Hi,
I’m currently working on a project and have encountered a bit of a problem. I’ve downloaded a large number of images in WebP format, but I need them to be in JPG format for compatibility with the software I’m using.
I’m using a Windows PC and am not sure what the best method is to convert WebP images to JPG. I’ve come across a few online WebP converters, but I’m hesitant to use them because of the file sizes and quantity.
Ideally, I prefer a solution that I can use offline and that allows for batch processing since I have quite a few images to convert. Appreciate any recommendations for reliable software or methods that you’ve personally used.
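For reference, one common offline approach is to script the conversion yourself. The sketch below assumes Python with the Pillow library is available (any batch-capable image tool would work equally well); it flattens transparency to RGB, since JPEG has no alpha channel:

```python
from pathlib import Path
from PIL import Image

def convert_webp_to_jpg(folder, quality=90):
    """Convert every .webp file in `folder` to a .jpg next to it."""
    converted = []
    for src in Path(folder).glob("*.webp"):
        dst = src.with_suffix(".jpg")
        # JPEG has no alpha channel, so flatten to RGB first
        Image.open(src).convert("RGB").save(dst, "JPEG", quality=quality)
        converted.append(dst)
    return converted
```

The originals are left untouched, so the JPGs can be spot-checked before deleting the WebP files.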
Understanding Health Bot Custom Logging Custom Dimension
Microsoft Health Bot can emit custom logging to a customer-supplied Application Insights instance (via its instrumentation key). See here for more details.
Allow IOC and Linux agent – SHA1 / SHA256 supported?
Hello,
I was wondering if there are any limitations in the Linux agent with regard to the supported hash methods: will a SHA256 IOC work for Linux or do I have to use SHA1?
I’m asking because I tried entering a SHA256 IOC and (at first glance) it does not work even after several hours. At the same time DeviceEvents and other tables only show SHA1 values for files so I wondered if SHA256 is ever calculated?
Trouble Installing Microsoft Site Recovery – Process Server Service Keeps Terminating
Hey everyone,
I’m in the middle of migrating our on-premise servers to the Azure cloud, and I’ve hit a snag that I can’t seem to get past. I’m at the step where I need to install the Microsoft Site Recovery agent on our appliance, but I’m stuck on the last part—validating the server configuration.
Every time I run the installer, it fails during the validation step. At the same time, I’ve noticed that the “Process Server” service keeps getting terminated, and I’m unable to enable it again.
Has anyone encountered this issue before? Any tips on how to get past this, or what might be causing the Process Server service to keep failing? I’m really stuck here and could use some advice!
Thanks in advance for any help.
How to downgrade Windows Server 2019 Datacenter Evaluation to Windows Server 2019 Standard
I have a Windows Server 2019 standard license. However, I accidentally installed Windows Server 2019 Datacenter Evaluation, so I want to downgrade back to Windows Server 2019 Standard. Please help me. Thank you.
Is it possible to realize self-supervised RL by adding auxiliary loss to the loss of Critic of PPO agent?
I am trying to realize self-supervised (SS) RL in MATLAB using a PPO agent. SS RL can improve exploration and thereby enhance convergence. In particular, it can be explained as follows:
At step t, in addition to the original head of the Critic that outputs the value via fullyConnectedLayer(1), there is an additional head, parallel to the original one and connected to the main body of the Critic, which outputs a prediction of the future state, denoted ŝ(t+1), via fullyConnectedLayer(N), with N being the dimension of the state.
Then this prediction of the future state is used to calculate the SS loss by comparing it with the real future state, i.e., L_SS = ||ŝ(t+1) − s(t+1)||², where s(t+1) is the real future state.
Later, this SS loss is sampled and added to the original loss of the Critic, L_critic (i.e., equation 5-b in https://ww2.mathworks.cn/help/reinforcement-learning/ug/proximal-policy-optimization-agents.html), as follows:
L = L_critic + w · L_SS,
which requires adding an auxiliary loss to the original Critic loss.
So, is it possible to realize the above SS RL while avoiding significant modification in the source code of RL toolbox? Thank you!
self-supervised rl, auxiliary loss, loss of critic, rlppoagent
MATLAB Answers — New Questions
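For what it's worth, the combined objective described above can be written framework-agnostically. The NumPy sketch below only illustrates the math; it is not RL Toolbox code, and the weight w is a hypothetical hyperparameter balancing the two terms:

```python
import numpy as np

def combined_critic_loss(v_pred, v_target, next_state_pred, next_state_true, w=0.1):
    # standard critic loss: mean squared error against the value target
    critic_loss = np.mean((v_pred - v_target) ** 2)
    # self-supervised auxiliary loss: error of the predicted next state
    ss_loss = np.mean((next_state_pred - next_state_true) ** 2)
    # total loss: critic loss plus the weighted auxiliary term
    return critic_loss + w * ss_loss
```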
How to extract specific matrix after implementing svd function
I want to solve a PEB minimization problem in a RIS setting. I have formulated an SDP problem via CVX, and at the end of the CVX formulation I use the built-in svd (or svds) function to extract the RIS phase profile matrix F (of size M times T, where M is the number of RIS elements and T the number of transmissions). The optimal solution of the CVX problem is X, of size M by M, with X = FF^H. F, of size M by T, is extracted from X using svds.
The code is below,
M = signal.M;
T = signal.T;
% k-th column of identity matrix
e1 = [1; 0; 0];
e2 = [0; 1; 0];
e3 = [0; 0; 1];
% define optimization problem
cvx_begin sdp
variable X(M, M) hermitian
variable u(3, 1)
minimize(sum(u))
subject to
[J_car(1:3, 1:3), e1; e1', u(1)] >= 0;
[J_car(1:3, 1:3), e2; e2', u(2)] >= 0;
[J_car(1:3, 1:3), e3; e3', u(3)] >= 0;
trace(X) == M * T;
X >= 0;
cvx_end
optimX = X;
[U, S, V] = svds(optimX, T);
num_singular_values = min(size(S, 1), T);
optimF = U(:, 1:num_singular_values) * sqrt(S(1:num_singular_values, 1:num_singular_values));
end
and the optimization problem is:
Then my questions are:
Is it correct to use ‘svd’ to extract F (size M by T) from the optimal solution X?
If not, what method can I try? If possible, include brief code for it.
This is not a programming issue but a mathematical one: is the sum of all elements of the auxiliary variable (the objective of (12)) the same as the objective of (11)?
svd, ris, mimo, cvx, optim
MATLAB Answers — New Questions
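As a side note on the first question: for a Hermitian PSD matrix X, a truncated SVD coincides with an eigendecomposition, and the extraction F with FF^H = X is exact only when X has rank at most T. The NumPy sketch below mirrors the svds-based extraction in the question (illustrative only, not CVX/MATLAB code):

```python
import numpy as np

def extract_factor(X, T):
    """Recover an M-by-T factor F with F @ F.conj().T ~= X,
    using the T leading eigenpairs of the Hermitian PSD matrix X."""
    w, V = np.linalg.eigh(X)            # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:T]       # indices of the T largest
    w_top = np.clip(w[idx], 0.0, None)  # guard tiny negative round-off
    return V[:, idx] * np.sqrt(w_top)   # scale each eigenvector column
```

If the solver returns an X with rank greater than T, this truncation is only an approximation, and a rank-reduction or randomization step is typically needed on top of it.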
SharePoint Hybrid Content Types not syncing
So, Microsoft documentation regarding the configuration and management of hybrid content types (here, and here) is limited to say the least. Despite a fair amount of troubleshooting with authentication we appear to have a correct configuration. However no online content types are synced to on-prem.
Setup
1. Created a local content type in SP on-prem (Subscription edition).
2. Ran the PowerShell script to replicate the on-prem content type to online:
$credential = Get-Credential
Copy-SPContentTypes -LocalSiteUrl https://spserver/sites/contenttypehub -LocalTermStoreName "Managed Metadata Service" -RemoteSiteUrl https://domain.sharepoint.com/ -ContentTypeNames @("ContentType") -Credential $credential
NOTE 1: the required credential in this command is an Azure account with at least SharePoint Admin role.
NOTE 2: the command sends legacy authentication so the Azure account used MUST NOT HAVE MFA enabled. You may also need to ensure that the account authentication is not blocked by a Conditional Access rule, otherwise you get the misleading error “Copy-SPContentTypes : The sign-in name or password does not match one in the Microsoft account system.”
3. Ran the hybrid wizard to configure Taxonomy and Content Type Synchronisation.
All of the above was completed successfully.
– The on-prem content type was replicated to online.
– The hybrid wizard ran with no errors.
– The server timer job “Content Type Replication” runs on schedule or manually with no errors.
Problem
No online content types are synced to on-prem.
—
There is no documentation regarding how the ongoing timer job synchronisation works, i.e. which account and authentication it uses, which URLs or ports it communicates on, or whether it initiates a push from the cloud or a pull from on-prem…
Incident mails for Sentinel Alerts/Incidents
Hi everyone,
we integrated Sentinel with Defender and now get alerts from Sentinel in Defender XDR, but they do not trigger any mail. If I look at the email notification settings, I cannot see “Sentinel” listed as a service.
Is this by design? How can we get emails for Sentinel alerts?
BR
Stephan
Link URL Teams to a username chat of the Edge Bar
I want to generate a URL to open a user chat with the Microsoft Teams version that is now available with Microsoft Edge. Is this possible?
<a href=”msteams://”>Link</a> (opens the Teams app)
<a href=”https://teams.microsoft.com/l/chat/0/0?users=email address removed for privacy reasons”>Link</a> (opens a chat with the user in the Teams app)
Thank you!
Google Workspace to microsoft 365 migration
When migrating from Google Workspace to Microsoft 365 using Microsoft’s automated migration method, what are the chances of emails being duplicated if an email has multiple labels in Google?
Also, what is the limit on user data we can migrate per day, and how many concurrent users can we migrate?
Filter By Form started crashing Access
Issue: Filter By Form started crashing Access
Causes DB to crash when the function is applied in a Form. ‘Filter in Form’ function does NOT crash when applied to a table.
What I can establish:
Started in the last few days, I believe after a recent Office update.
The crash occurs on previous versions of my DB (going back months) which worked fine until last week.
Did NOT happen with the same DB on a different PC that had the previous version of Office. However, after that PC was updated (overnight), the crash started happening.
The function crashes and does weird stuff in a new template DB (i.e. one pulled from the Microsoft template list when you click New in Access; I used the “Time and Billing” template).
The function does not crash on all DBs I have.
Other information
I do not have any add-ins or VBA code in my DBs.
I am using a brand-new Lenovo Yoga PC and a Surface Pro. Same issue on both.
How to recover data from a corrupted flash drive on Windows PC?
I’m in need of some urgent help. My USB flash drive has suddenly become corrupted, and I can’t access any of the files on it. I’m using a Windows 10 PC, and every time I try to open the drive, I either get an error message saying the drive needs to be formatted, or it just doesn’t show any of my files.
I have some important documents and photos on the drive that I really need to recover. I haven’t formatted the drive yet, as I know this could lead to permanent data loss. Can anyone guide me on how to recover data from a corrupted flash drive on Windows PC? I’ve heard there might be some software tools or methods that can help, but I’m not sure where to start.
Comparing Microsoft Cloud Email Services
HVE and ECS are two competing Microsoft Cloud Email Services. At least, they seem to compete. In reality, HVE and ECS serve different target audiences. HVE is all about internal email services for apps and devices while ECS is for high volume external mailings like customer newsletters. We tested both services by sending subscription reminder notifications to Office 365 for IT Pros readers.
https://office365itpros.com/2024/08/13/microsoft-cloud-email-services/
Asynchronous HTTP APIs with Azure Container Apps jobs
When building HTTP APIs, it can be tempting to synchronously run long-running tasks in a request handler. This approach can lead to slow responses, timeouts, and resource exhaustion. If a request times out or a connection is dropped, the client won’t know if the operation completed or not. For CPU-bound tasks, this approach can also bog down the server, making it unresponsive to other requests.
In this post, we’ll look at how to build an asynchronous HTTP API with Azure Container Apps. We’ll create a simple API that implements the Asynchronous Request-Reply pattern: with the API hosted in a container app and the asynchronous processing done in a job. This approach provides a much more robust and scalable solution for long-running tasks.
Long-running API requests in Azure Container Apps
Azure Container Apps is a serverless container platform. It’s ideal for hosting a variety of workloads, including HTTP APIs.
Like other serverless and PaaS platforms, Azure Container Apps is designed for short-lived requests — its ingress currently has a maximum timeout of 4 minutes. As an autoscaling platform, it’s designed to scale dynamically based on the number of incoming requests. When scaling in, replicas are removed. Long-running requests can terminate abruptly if the replica handling the request is removed.
Azure Container Apps jobs
Azure Container Apps has two types of resources: apps and jobs. Apps are long-running services that respond to HTTP requests or events. Jobs are tasks that run to completion and can be triggered by a schedule or an event.
Jobs can also be triggered programmatically. This makes them a good fit for implementing asynchronous processing in an HTTP API. The API can start a job execution to process the request and return a response immediately. The job can then take as long as it needs to complete the processing. The client can poll a status endpoint on the app to check if the job has completed and get the result.
The Asynchronous Request-Reply pattern
Asynchronous Request-Reply is a common pattern for handling long-running operations in HTTP APIs. Instead of waiting for the operation to complete, the API returns a status code indicating that the operation has started. The client can then poll the API to check if the operation has completed.
Here’s how the pattern applies to Azure Container Apps:
The client sends a request to the API (hosted as a container app) to start the operation.
The API saves the request (our example uses Azure Cosmos DB), starts a job to process the operation, and returns a 202 Accepted status code with a Location header pointing to a status endpoint.
The client polls the status endpoint. While the operation is in progress, the status endpoint returns a 200 OK status code with a Retry-After header indicating when the client should poll again.
When the operation is complete, the status endpoint returns a 303 See Other status code with a Location header pointing to the result. The client automatically follows the redirect to get the result.
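The four steps can be sketched as a tiny state machine. The Python below is purely illustrative (a fake in-memory server standing in for the container app, not the article's Node.js code), showing the status codes the client reacts to:

```python
class FakeAsyncApi:
    """Stand-in server: 202 on submit, 200 + Retry-After while the
    job is pending, 303 redirecting to the result once it finishes."""
    def __init__(self, polls_until_done=2):
        self._remaining = polls_until_done

    def submit(self):
        return 202, {"Location": "/orders/status/42"}

    def poll_status(self):
        if self._remaining > 0:
            self._remaining -= 1
            return 200, {"Retry-After": 0}, {"status": "pending"}
        return 303, {"Location": "/orders/42"}, None

    def get_result(self):
        return 200, {"id": "42", "status": "completed"}

def submit_and_wait(api):
    code, headers = api.submit()
    assert code == 202                  # operation accepted, not yet done
    while True:
        code, headers, body = api.poll_status()
        if code == 303:                 # done: follow the redirect
            return api.get_result()[1]
        # a real client would sleep for headers["Retry-After"] seconds here
```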
Async HTTP API app
You can find the source code in this GitHub repository.
The API is a simple Node.js app that uses Fastify. It demonstrates how to build an async HTTP API that accepts orders and offloads the processing of the orders to jobs. The app has a few simple endpoints.
POST /orders
This endpoint accepts an order in its body. It saves the order to Cosmos DB with a status of “pending” and starts a job execution to process the order.
fastify.post('/orders', async (request, reply) => {
  const orderId = randomUUID()
  // save order to Cosmos DB
  await container.items.create({
    id: orderId,
    status: 'pending',
    order: request.body,
  })
  // start job execution
  await startProcessorJobExecution(orderId)
  // return 202 Accepted with Location header
  reply.code(202).header('Location', '/orders/status/' + orderId).send()
})
We’ll take a look at the job later in this article. In the above code snippet, startProcessorJobExecution is a function that starts the job execution. It uses the Azure Container Apps management SDK to start the job.
const credential = new DefaultAzureCredential()
const containerAppsClient = new ContainerAppsAPIClient(credential, subscriptionId)
// …
async function startProcessorJobExecution(orderId) {
  // get the existing job's template
  const { template: processorJobTemplate } =
    await containerAppsClient.jobs.get(resourceGroupName, processorJobName)
  // add the order ID to the job's environment variables
  const environmentVariables = processorJobTemplate.containers[0].env
  environmentVariables.push({ name: 'ORDER_ID', value: orderId })
  const jobStartTemplate = { template: processorJobTemplate }
  // start the job execution with the modified template
  const jobExecution = await containerAppsClient.jobs.beginStartAndWait(
    resourceGroupName, processorJobName, jobStartTemplate
  )
}
The job takes the order ID as an environment variable. To set the environment variable, we start the job execution with a modified template that includes the order ID.
We use managed identities to authenticate with both the Azure Container Apps management SDK and the Cosmos DB SDK.
GET /orders/status/:orderId
The previous endpoint returns a 202 Accepted status code with a Location header pointing to this status endpoint. The client can poll this endpoint to check the status of the order.
This request handler retrieves the order from Cosmos DB. If the order is still pending, it returns a 200 OK status code with a Retry-After header indicating when the client should poll again. If the order is complete, it returns a 303 See Other status code with a Location header pointing to the result.
fastify.get('/orders/status/:orderId', async (request, reply) => {
  const { orderId } = request.params
  // get the order from Cosmos DB
  const { resource: item } = await container.item(orderId, orderId).read()
  if (item === undefined) {
    reply.code(404).send()
    return
  }
  if (item.status === 'pending') {
    reply.code(200).headers({
      'Retry-After': 10,
    }).send({ status: item.status })
  } else {
    reply.code(303).header('Location', '/orders/' + orderId).send()
  }
})
GET /orders/:orderId
This endpoint returns the result of the order processing. The status endpoint redirects to this resource when the order is complete. It retrieves the order from Cosmos DB and returns it.
fastify.get('/orders/:orderId', async (request, reply) => {
  const { orderId } = request.params
  // get the order from Cosmos DB
  const { resource: item } = await container.item(orderId, orderId).read()
  if (item === undefined || item.status === 'pending') {
    reply.code(404).send()
    return
  }
  if (item.status === 'completed') {
    reply.code(200).send({ id: item.id, status: item.status, order: item.order })
  } else if (item.status === 'failed') {
    reply.code(500).send({ id: item.id, status: item.status, error: item.error })
  }
})
Order processor job
The order processor job is another Node.js app. As it's just a demo, it simply waits a while, updates the order status in Cosmos DB, and exits. In a real-world scenario, the job would process the order, update its status, and possibly send a notification.
We deploy it as a job in Azure Container Apps. The POST /orders endpoint above starts the job execution. The job takes the order ID as an environment variable and uses it to update the order status in Cosmos DB.
Like the API app, the job uses managed identities to authenticate with Azure Cosmos DB.
The code is in the same GitHub repository.
import { DefaultAzureCredential } from '@azure/identity'
import { CosmosClient } from '@azure/cosmos'

const credential = new DefaultAzureCredential()
const client = new CosmosClient({
  endpoint: process.env.COSMOSDB_ENDPOINT,
  aadCredentials: credential
})
const database = client.database('async-api')
const container = database.container('statuses')

const orderId = process.env.ORDER_ID
const orderItem = await container.item(orderId, orderId).read()
const orderResource = orderItem.resource
if (orderResource === undefined) {
  console.error('Order not found')
  process.exit(1)
}

// simulate processing time
const orderProcessingTime = Math.floor(Math.random() * 30000)
console.log(`Processing order ${orderId} for ${orderProcessingTime}ms`)
await new Promise(resolve => setTimeout(resolve, orderProcessingTime))

// update order status in Cosmos DB
orderResource.status = 'completed'
orderResource.order.completedAt = new Date().toISOString()
await orderItem.item.replace(orderResource)
console.log(`Order ${orderId} processed`)
HTTP client
To call the API and wait for the result, here’s a simple JavaScript function that works just like fetch but waits for the job to complete. It also accepts a callback function that’s called each time the status endpoint is polled so you can log the status or update the UI.
async function fetchAndWait() {
const input = arguments[0]
let init = arguments[1]
let onStatusPoll = arguments[2]
// if arguments[1] is not a function
if (typeof init === ‘function’) {
init = undefined
onStatusPoll = arguments[1]
}
onStatusPoll = onStatusPoll || (async () => {})
// make the initial request
const response = await fetch(input, init)
if (response.status !== 202) {
throw new Error(`Something went wrongnResponse: ${await response.text()}n`)
}
const responseOrigin = new URL(response.url).origin
let statusLocation = response.headers.get(‘Location’)
// if the Location header is not an absolute URL, construct it
statusLocation = new URL(statusLocation, responseOrigin).href
// poll the status endpoint until it’s redirected to the final result
while (true) {
const response = await fetch(statusLocation, {
redirect: ‘follow’
})
if (response.status !== 200 && !response.redirected) {
const data = await response.json()
throw new Error(`Something went wrongnResponse: ${JSON.stringify(data, null, 2)}n`)
}
// redirected, return final result and stop polling
if (response.redirected) {
const data = await response.json()
return data
}
// the Retry-After header indicates how long to wait before polling again
const retryAfter = parseInt(response.headers.get('Retry-After')) || 10
// call the onStatusPoll callback so we can log the status or update the UI
await onStatusPoll({
response,
retryAfter,
})
await new Promise(resolve => setTimeout(resolve, retryAfter * 1000))
}
}
To use the function, we call it just like fetch. We pass an additional argument that’s a callback function that’s invoked each time the status endpoint is polled.
const order = await fetchAndWait('/orders', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
"customer": "Contoso",
"items": [
{
"name": "Apple",
"quantity": 5
},
{
"name": "Banana",
"quantity": 3
},
],
})
}, async ({ response, retryAfter }) => {
const { status } = await response.json()
const requestUrl = response.url
messagesDiv.innerHTML += `Order status: ${status}; retrying in ${retryAfter} seconds (${requestUrl})\n`
})
// display the final result
document.querySelector('#order').innerHTML = JSON.stringify(order, null, 2)
If we run this in the browser, we can open up dev tools and see all the HTTP requests that are made.
In the portal, we also can see the job execution history.
Conclusion
With the Asynchronous Request-Reply pattern, we can build robust and scalable HTTP APIs that handle long-running operations. By using Azure Container Apps jobs, we can offload the processing to a job execution that doesn’t consume resources from the API app. This approach allows the API to respond quickly and handle many requests concurrently.
Originally posted on anthonychu.ca
Microsoft Tech Community – Latest Blogs
Is there any utility to display message or variables in custom criteria script of simulink test ?
In the custom criteria script of Simulink Test, disp() doesn’t work, so it is very hard to debug the script. I had to use error(), and it doesn’t support most of the types like structures, cells, etc. Is there any utility available for debugging the custom criteria script? simulink test, custom criteria script MATLAB Answers — New Questions
HDL Cosimulation with Cadence Xcelium setup
When running the example "GettingStartedWithSimulinkHDLCosimExample" with Cadence Xcelium, I get the following messages.
Executing nclaunch tclstart commands…
xmsim(64): 22.09-s004: (c) Copyright 1995-2022 Cadence Design Systems, Inc.
xmsim: *W,NOMTDGUI: Multi-Threaded Dumping is disabled for interactive debug mode.
xmsim: *E,STRPIN: Could not initialize SimVision connection: SimVision/Indago process terminated before a connection was established.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object ‘/tools/matlab/R2023bU1/sys/os/glnxa64/libstdc++.so.6’ from LD_PRELOAD cannot be preloaded: ignored.
/tools/cds/xceliummain_22.09.004_Linux/tools.lnx86/simvision/bin/64bit/simvision: line 48: cds_plat: command not found
/tools/cds/xceliummain_22.09.004_Linux/tools.lnx86/simvision/bin/64bit/simvision: line 101: /tools/cds/xceliummain_22.09.004_Linux/tools./simvision/bin/64bit/simvision.exe: No such file or directory
SimVision/Indago process terminated before a connection could be established.
while executing
"exec <@stdin >@stdout xmsim -gui rcosflt_rtl -64bit -input {@simvision {set w [waveform new]}} -input {@simvision {waveform add -using $w -signals rco…"
("uplevel" body line 1)
invoked from within
"uplevel 1 [join $args]"
(procedure "hdlsimulink" line 22)
invoked from within
"hdlsimulink rcosflt_rtl -64bit -socket 44014 -input "{@simvision {set w [waveform new]}}" -input "{@simvision {waveform add -using $w -signals rc…"
(file "compile_and_launch.tcl" line 66)
ERROR hit any key to exit xterm
Could you please guide me how to set up cosimulation with Cadence Xcelium? cosimulation, cadence, hldverifier MATLAB Answers — New Questions
Link one excel file’s cell A1 on Sharepoint to another excel file as a link to navigate to that file
Hi
I want to link cell A1 of one Excel file on SharePoint to another Excel file’s SharePoint location, so that clicking it navigates to that file.
Window 11 Keeps Asking For My PIN Even After I’ve Told It Not To
I had my computer set to ‘Never ask for PIN’ and it was doing just fine until I restarted it and it installed some updates. Now I am back to it asking for my PIN every time it wakes from sleep. I SO do not need this! I live alone and no one else is able to touch my computer. I am the administrator and only user for the computer.
I have opened netplwiz and unchecked the box that says a user must sign in, but the problem persists.
Recently, I also gave Google access to my Microsoft files, and I am wondering if Google is responsible for the problem. I got a popup today saying Google was asking for my PIN. This was not when I was signing into the computer after waking it up, though, but a different situation.
Please help – have tried everything I can think of!
Why is element in the queue deleted even if the function throws an exception?
I am writing an Azure Function with a queue trigger, and I want to send the data to the backend service if the backend service is available; if it is not available, the message should remain in the queue.
My question is: how can I achieve this?
My code and host.json look like this:
[Function("QueueCancellations")]
public async Task<IActionResult> QueueCancellation([QueueTrigger("requests", Connection = "ConnectionStrings:QUEUE_CONNECTION_STRING")] string message)
{
try
{
using (var httpClient = new HttpClient())
{
var content = new StringContent(message, Encoding.UTF8, "application/json");
var httpResponse = await httpClient.PostAsync(_configuration["LOCAL_SERVICE_URL_CANCELL"], content);
if (httpResponse.IsSuccessStatusCode)
{
return new OkObjectResult("Data sent to backend");
}
else
{
return new BadRequestObjectResult("Backend not available");
}
}
}
catch (Exception ex)
{
_logger.LogError(ex.Message);
return new BadRequestObjectResult("Backend not available");
}
}

{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": false,
"excludedTypes": "Request"
},
"enableLiveMetricsFilters": true
}
},
"logLevel": {
"default": "Information",
"Host.Results": "Information",
"functions": "Information",
"Host.Aggregator": "Information"
},
"extensions": {
"queues": {
"maxPollingInterval": "00:00:02",
"visibilityTimeout": "00:00:30",
"batchSize": 16,
"maxDequeueCount": 5,
"newBatchThreshold": 8,
"messageEncoding": "base64"
}
}
}