Author: PuTI
SQL 2008 error
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
This error started occurring today on all clients connecting to MS SQL Server 2008. A few computers can connect, but the rest cannot.
What could be the reason? Kindly explain.
Issue with Rendering contentURL in invoking stageView of Teams Bot
I am currently facing an issue with rendering a contentURL in the modal view within the multistage view of my bot. I have implemented the following Adaptive Card, but the contentURL is not being rendered as expected. Instead, I’m getting the error message: “There was a problem reaching this app.” The same URL works correctly in a browser window.
Here is the sample Adaptive Card:
{
  "type": "Image",
  "url": "ImageURL",
  "altText": "Insights",
  "selectAction": {
    "type": "Action.Submit",
    "title": "Open",
    "data": {
      "msteams": {
        "type": "invoke",
        "value": {
          "type": "tab/tabInfoAction",
          "tabInfo": {
            "contentUrl": "https://apiserver.ngrok.app/test",
            "websiteUrl": "https://apiserver.ngrok.app/test",
            "name": "utterance",
            "entityId": "entityId",
            "openMode": "modal"
          }
        }
      }
    }
  }
}
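One frequent cause of the "There was a problem reaching this app." message is that the domain hosting the contentUrl is not listed under validDomains in the Teams app manifest, so Teams blocks the page inside the stage view even though the same URL loads fine in a browser. As a minimal sketch (the ngrok host is taken from the card above; substitute your own domain in your manifest):

"validDomains": [
    "apiserver.ngrok.app"
]

The page served at contentUrl should also load the Microsoft Teams JavaScript SDK and complete its initialization call; otherwise Teams may report the app as unreachable.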
Microsoft Tools for Small and Medium Businesses AMA
Join the Microsoft Tools for Small and Medium Businesses Ask Me Anything (AMA) on Wednesday, September 25th, from 9:00 AM to 10:00 AM PT.
This event is an opportunity to connect with Microsoft experts who can answer questions about how to use Microsoft tools to enhance your small or medium business.
Bryan Allen (Director, SMB Product Marketing, Microsoft 365) will be on hand to answer your SMB-related M365 questions, such as how to simplify communication across your business and improve cross-team collaboration with M365.
RSVP now!
Run-time error ‘5’: Invalid procedure call or argument
Hi everyone, I am trying to run a macro that formats some data and creates a workbook. To be honest, I do not have much experience with macros and used ChatGPT as a reference. ChatGPT gave me the code below, which did work at first, but now I am getting the error mentioned above. The macro should do some formatting first, then rename that sheet to today's date in "mmddyy" format. It should then add a new sheet named "Data Pivot" and move it to the beginning so it is first. At cell A1 of that sheet there should be a pivot table that references the data in the previous sheet. My data starts at row 3 and spans columns A to I, and the pivot table should pick up all filled cells from A3 down through column I. The issue is that the macro throws the error right when it is about to insert the pivot table at cell A1.
When I click debug this is where the error is:
Set pivotTbl = pivotWs.PivotTables.Add(PivotCache:=ThisWorkbook.PivotCaches.Create( _
SourceType:=xlDatabase, SourceData:=dataRange), TableDestination:=pivotWs.Cells(1, 1))
Please help!
This is my code:
Sub CreateReport()
    Dim ws As Worksheet
    Dim pivotWs As Worksheet
    Dim pivotTbl As PivotTable
    Dim dataRange As Range
    Dim todaysDate As String
    ' Step 1: Delete column J
    Columns("J:J").Delete
    ' Step 2: Add 2 new rows to the top
    Rows("1:2").Insert Shift:=xlDown
    ' Step 3: Type "PB Charge Review by WQ" in cell A1
    Range("A1").Value = "PB Charge Review by WQ"
    ' Step 4: Merge cells A1 and B1
    Range("A1:B1").Merge
    ' Step 5: Bold cell A1 and make the font 12 pt
    With Range("A1")
        .Font.Bold = True
        .Font.Size = 12
    End With
    ' Step 6: Align the text in cell A1 to the left
    Range("A1").HorizontalAlignment = xlLeft
    ' Step 7: Underline cells A3:I3 (headers)
    Range("A3:I3").Font.Underline = xlUnderlineStyleSingle
    ' Step 8: Autofit columns A-I
    Columns("A:I").AutoFit
    ' Step 9: Rename the sheet to today's date (mmddyy)
    todaysDate = Format(Date, "mmddyy")
    ActiveSheet.Name = todaysDate
    ' Step 10: Add a new sheet and move it to the beginning
    Set pivotWs = Sheets.Add(Before:=Sheets(1))
    ' Step 11: Rename this new sheet to "Data Pivot"
    pivotWs.Name = "Data Pivot"
    ' Step 12: Set the data range for the pivot table
    Set ws = Sheets(todaysDate)
    Set dataRange = ws.Range("A3:I" & ws.Cells(ws.Rows.Count, "A").End(xlUp).Row)
    ' Step 13: Create the pivot table in the new sheet
    Set pivotTbl = pivotWs.PivotTables.Add(PivotCache:=ThisWorkbook.PivotCaches.Create( _
        SourceType:=xlDatabase, SourceData:=dataRange), TableDestination:=pivotWs.Cells(1, 1))
    ' Step 14-19: Add fields to the pivot table and format them
    With pivotTbl
        ' Add "Owning Area" to rows
        .PivotFields("Owning Area").Orientation = xlRowField
        ' Add "Num of Chg Sess" to values as a sum
        With .PivotFields("Num of Chg Sess")
            .Orientation = xlDataField
            .Function = xlSum
        End With
        ' Add "Amt on Chg Rvw" to values as a sum
        With .PivotFields("Amt on Chg Rvw")
            .Orientation = xlDataField
            .Function = xlSum
        End With
        ' Add "Avg Svc Dt Age" to values as an average
        With .PivotFields("Avg Svc Dt Age")
            .Orientation = xlDataField
            .Function = xlAverage
        End With
        ' Add "Avg Age" to values as an average
        With .PivotFields("Avg Age")
            .Orientation = xlDataField
            .Function = xlAverage
        End With
    End With
    ' Step 20: Format columns B, D, and E with a custom number format to show dashes instead of zeros
    pivotWs.Columns(2).NumberFormat = "#,##0;-#,##0;-"
    pivotWs.Columns(4).NumberFormat = "#,##0;-#,##0;-"
    pivotWs.Columns(5).NumberFormat = "#,##0;-#,##0;-"
    ' Step 21: Format column C as currency with no decimals
    pivotWs.Columns(3).NumberFormat = "$#,##0"
End Sub
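As a possible starting point for debugging (an illustrative sketch, not a confirmed fix): Run-time error 5 at PivotTables.Add often means the pivot cache could not be built from the supplied source, for example because column A has no data rows below the headers after the two rows were inserted, or because the source range is passed in a form Excel cannot resolve. One common workaround is to validate the range first and pass SourceData as a fully qualified address string, replacing Steps 12 and 13 with something like:

    ' Illustrative sketch only: validate the source range, then pass it as a
    ' fully qualified R1C1 address string instead of a Range object.
    Dim lastRow As Long
    Set ws = Sheets(todaysDate)
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
    If lastRow < 4 Then
        MsgBox "No data rows found below the headers in column A.", vbExclamation
        Exit Sub
    End If
    Set dataRange = ws.Range("A3:I" & lastRow)
    Set pivotTbl = pivotWs.PivotTables.Add( _
        PivotCache:=ThisWorkbook.PivotCaches.Create( _
            SourceType:=xlDatabase, _
            SourceData:="'" & ws.Name & "'!" & dataRange.Address(ReferenceStyle:=xlR1C1)), _
        TableDestination:=pivotWs.Cells(1, 1))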
SharePoint list item highlighted for no apparent reason
Hi folks – I have a modern Hub on Microsoft 365 online. I am the tenant and site admin. We just noticed today that one of the items in one of our lists is highlighted with a dark outline. The item isn't selected; it just appears with this border for no apparent reason. See attached image. Confirmed that it's appearing this way for multiple users. It just started today and is only happening on the one item.
Any ideas on what could be causing this?
Protect and Detect: Microsoft Defender for Identity Expands to Entra Connect Server
We are excited to announce a new Microsoft Defender for Identity sensor for Entra Connect servers. This addition is a significant step in our ongoing commitment to expanding Defender for Identity’s coverage across hybrid identity environments. It reinforces our vision of overseeing and protecting the entire identity fabric, greatly enhancing the SOC’s visibility and protections for these complex environments.
Identities are one of the most targeted attack vectors, if not the most targeted, and cybercriminals are always evolving their strategies to exploit new vulnerabilities or gaps in protection. Many organizations today manage hybrid identity environments, with an on-premises Active Directory footprint along with a deployment of Entra ID in the cloud. The gaps between these two elements present ample opportunities for bad actors, and as the primary bridge between them, Entra Connect servers can be classified as tier-0 assets.
To stay ahead of emerging threats and deliver powerful security solutions, our team is continuously evolving and updating our offerings, and the primary objective of this new sensor is to help our customers better prevent, detect, and remediate credential theft and privilege escalation attacks commonly initiated against Entra Connect.
What is Entra Connect? What security value does the new sensor provide?
Entra Connect (previously known as Azure AD Connect or AAD Connect) is a Microsoft service used to synchronize on-premises Active Directory environments with Entra ID (formerly Azure Active Directory). Entra Connect facilitates identity management and provides single sign-on capabilities for users across on-premises and cloud resources by creating a common identity. This synchronization is essential for maintaining consistent and secure access across different platforms.
The new Microsoft Defender for Identity sensor for Entra Connect servers provides comprehensive monitoring of synchronization activities between Entra Connect and Active Directory, offering crucial insights into potential security threats and unusual activities. With this enhanced visibility across hybrid identity environments Defender for Identity can now provide new Entra Connect specific security alerts and posture recommendations, as detailed below.
New Detections (in Public Preview):
Suspicious Interactive Logon to the Entra Connect Server:
Direct logins to Entra Connect servers are highly unusual and potentially malicious. Attackers often target these servers to steal credentials for broader network access. Microsoft Defender for Identity can now detect abnormal logins to Entra Connect servers, helping you identify and respond to these potential threats faster. It is specifically applicable when the Entra Connect server is a standalone server and not operating as a Domain Controller.
Pre-requisite: Ensure that the 4624 logon event is enabled on the Entra Connect server. This step is necessary only if the Entra Connect server is not functioning as a Domain Controller.
User Password Reset by Entra Connect Account:
The Entra Connect connector account often holds high privileges, including the ability to reset users' passwords. Microsoft Defender for Identity now has visibility into those actions and will detect any use of those permissions identified as malicious or illegitimate. This alert will be triggered only if the password writeback feature is disabled.
Suspicious writeback by Entra Connect on a sensitive user:
While Entra Connect already prevents writeback for users in privileged groups, Microsoft Defender for Identity expands this protection by identifying additional types of sensitive accounts. This enhanced detection helps prevent unauthorized password resets on critical accounts, which can be a crucial step in advanced attacks targeting both cloud and on-premises environments.
Additional improvements and capabilities:
New activity of any failed password reset on a sensitive account available in the ‘IdentityDirectoryEvents’ table in Advanced Hunting. This can help customers track failed password reset events and create custom detection based on this data.
Enhanced accuracy for the DC sync attack detection.
New health alert for cases where the sensor is unable to retrieve the configuration from the Entra Connect service.
Extended monitoring for security alerts, such as PowerShell Remote Execution Detector, by enabling the new sensor on Entra Connect servers.
New posture recommendations in Microsoft Secure Score (Identity security assessment):
Rotate password for Entra Connect connector account:
A compromised Entra Connect connector account (the AD DS connector account, commonly shown as MSOL_XXXXXXXX) can grant access to high-privilege functions like replication and password resets, allowing attackers to modify synchronization settings and compromise security in both cloud and on-premises environments, as well as offering several paths to compromising the entire domain. In this assessment we recommend that customers change the password of MSOL accounts whose password was last set over 90 days ago.
Remove unnecessary replication permissions for Entra Connect Account:
By default, the Entra Connect connector account has extensive permissions to ensure proper synchronization (even if they are not actually required). If Password Hash Sync is not configured, it’s important to remove unnecessary permissions to reduce the potential attack surface.
Change password for Entra seamless SSO account configuration:
This report lists all Entra seamless SSO computer accounts with password last set over 90 days ago. The password for the Azure SSO computer account is not automatically changed every 30 days. If an attacker compromises this account, they can generate service tickets for the AZUREADSSOACC account on behalf of any user and impersonate any user in the Entra tenant that is synchronized from Active Directory. An attacker can use this to move laterally from Active Directory into Entra ID.
Remove Resource Based Constrained Delegation for Entra seamless SSO account:
If resource-based constrained delegation is configured on the AZUREADSSOACC computer account, an account with the delegation would be able to generate service tickets for the AZUREADSSOACC account on behalf of any user and impersonate any user in the Entra tenant that is synchronized from AD.
All new recommendations require a sensor installed on servers running Entra Connect services.
The recommendations related to the Entra seamless SSO account will be available only if Defender for Identity can detect this type of computer account, while the recommendations related to the connector account will be available only if our sensor was able to retrieve configuration data from the Entra Connect services.
As cyber threats become more sophisticated, the need for advanced security solutions is more pressing than ever. The new sensor for Entra Connect Server within Microsoft Defender for Identity represents a significant advancement in protecting and detecting threats within the identity fabric. By adopting this powerful tool, organizations can enhance their security posture, safeguard their identity infrastructure, and maintain the trust of their users.
We highly recommend customers install a sensor on any Domain controller, AD CS, AD FS, or Entra Connect servers. In the upcoming weeks, our team will delve deeper into potential enhancements for the Entra Connect support to better help organizations stay secure and protected.
Learn more about the new sensor in our documentation here and stay tuned for more updates and insights on how MDI continues to innovate in the realm of cybersecurity, ensuring that your organization remains secure in an ever-changing digital world.
Copilot are we besties? Part 2
Previously, I shared an article about how Copilot has greatly benefited me as an instructional designer and eLearning developer.
I now use Copilot in about 90% of my work, significantly improving my efficiency and reducing my time for development. A major feature is the messaging prompt, which helps organize my thoughts and reference documents to clearly express ideas. Even my emails and requests to SMEs have become more detailed thanks to Copilot.
Recently, I was assigned a research project that required extracting information from documents and formulating my findings and ideas. The process that usually takes weeks was reduced to just days, thanks to the help of my friend Copilot.
I won’t turn this into an extended critique, but if you’re not using Copilot, you’re missing out. I rely on Copilot alongside every MS application I use to work more efficiently and generate ideas when I’m stuck.
Make Copilot your indispensable partner and Bestie for life.
Monitoring LLM Inference Endpoints with LLM Listeners
In this second blog post in the series, guest blogger Martin Bald, Sr. Manager DevRel and Community at Microsoft Partner Wallaroo.AI, goes through the steps to operationalize LLM models and put measures in place that help ensure model integrity and the staples of security, privacy, and compliance, avoiding outputs such as toxicity, hallucinations, and so on.
Introduction
With the emergence of GenAI and services associated with it such as ChatGPT, enterprises started to feel the pressure to quickly implement GenAI to make sure they are not left behind in the race towards broad enterprise AI adoption.
That said, when talking to our customers and partners, the adoption has not been a smooth ride, largely because teams underestimate the time it typically takes to get to effective and reliable LLMs. For those of you who might not know, it took OpenAI two years of testing before launching ChatGPT.
For AI practitioners, understanding the intricacies of bringing these powerful models into production environments is essential for building robust, high-performing AI systems.
LLM Monitoring with Listeners in Wallaroo
As we covered in the previous blog post on RAG LLM, any LLM deployed to production is not the end of the process. Far from it. Models must be monitored for performance to ensure they are performing optimally and producing the results that they are intended for.
With LLMs, proactive monitoring is critical. We have seen some very public situations where quality and accuracy problems, such as hallucinations and toxic outputs, have led to lawsuits and loss of credibility and trust for businesses.
Using RAG is not the only method that is available to AI Teams to make sure that LLMs are generating effective and accurate text. There may be certain use cases or compliance and regulatory rules that restrict the use of RAG. LLM accuracy and integrity can still be accomplished through the validation and monitoring components that we at Wallaroo.AI call LLM Listeners.
We came up with the concept of LLM Listeners after working with customers who were doing this in the context of traditional ML, where they were using different modalities or customer interactions related to audio scenarios, primarily calls where models would look for specific information on the call to gather sentiment and similar signals.
As our customers shifted towards LLMs as the interaction method for their customers, the same monitoring and models that were in place remained relevant. Together with our customers, we came up with the concept of an LLM Listener, which is essentially a set of models that we build and offer off the shelf and that can be customized to detect and monitor certain behaviors such as toxicity, harmful language, and so on.
You may be looking to generate an alert for poor quality responses immediately or even autocorrect that behavior from the LLM that can be done in-line. It can also be utilized offline if you’re looking to do some further analysis on the LLM interaction. This is especially useful if it’s something that is done in a more controlled environment. For example, you can be doing this in a RAG setting and add these validation and monitoring steps on top of that.
The LLM Listeners can also be orchestrated to generate real-time monitoring reports and metrics to understand how your LLM is behaving and ensure that it’s effective in production which helps drive the time to value for the business. You can also iterate on the LLM Listener and keep the endpoint static while everything that happens behind it can remain fluid to allow AI teams to iterate quickly on the LLMs without impacting the bottom line which could be your business reputation, revenue costs, customer satisfaction etc.
LLM Listeners with Wallaroo in Action
Let’s have a look at how these LLM Listeners work and how easy it is to deploy into production.
Fig -1
The LLM Listener approach illustrated in Fig. 1 is implemented as follows:
1: Input text arrives from the application along with the corresponding generated text.
2: We provide a service where you can host your LLM inference endpoint.
3: We log the interactions between the LLM inference endpoint and your users, so we can see the input text and the corresponding generated text.
4: The logs can be monitored by a suite of listener models; these can be anything from standard processes to other NLP models that monitor the outputs inline or offline. Think of them as sentiment analyzers or even full systems that check against some ground truth.
5: The LLM Listeners score your LLM interactions on a variety of factors and can be used to generate automated reporting and alerts in cases where behavior changes over time or some of these scores start to fall outside acceptable ranges.
In addition to the passive listening shown here, where listeners monitor for macro-level behaviors over the course of many interactions, we can also deploy these listeners inline alongside the LLM, giving it the ability to suppress outputs that violate these thresholds before they ever go out the door.
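To make the inline idea concrete, here is a short, purely illustrative Python sketch. It is not the Wallaroo API; the keyword-based scorer simply stands in for a real listener model, and the callable LLM endpoint is assumed to be provided by you.

def score_toxicity(text):
    # Placeholder scorer: a real listener would be an NLP model, not a word list.
    flagged = {"hate", "idiot", "stupid"}
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in flagged for w in words) / max(len(words), 1)

def guarded_generate(generate, prompt, threshold=0.1):
    # Call the LLM, score the candidate output, and suppress it if it crosses the threshold.
    candidate = generate(prompt)
    if score_toxicity(candidate) > threshold:
        return "Response withheld by the inline listener."
    return candidate

# Example usage with any callable inference endpoint:
# reply = guarded_generate(my_llm_endpoint, "Summarize this customer ticket ...")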
Now let’s see an example of this in action. You can follow this example from the LLM Monitoring docs page.
The following shows running the LLM Listener as a Run Once task via the Wallaroo SDK that evaluates the llama3-instruct LLM. The LLM Listener arguments can be modified to evaluate any other deployed LLMs with their own text output fields.
This assumes that the LLM Listener was already uploaded and is ready to accept new tasks, and we have saved it to the variable llm_listener.
Here we create and orchestrate the llm monitoring task for the LLM Listener and provide it the deployed LLM’s workspace and pipeline, and the LLM Listener’s models workspace and name.
args = {
    'llm_workspace': 'llm-models',
    'llm_pipeline': 'llamav3-instruct',
    'llm_output_field': 'out.generated_text',
    'monitor_workspace': 'llm-models',
    'monitor_pipeline': 'full-toxmonitor-pipeline',
    'window_length': -1,  # in hours. If -1, no limit (for testing)
    'n_toxlabels': 6,
}

task = llm_listener.run_once(name="sample_monitor", json_args=args, timeout=1000)
Next, we’ll list out the tasks from a Wallaroo client saved to wl, and verify that the task finished with Success.
wl.list_tasks()
Fig 2.
With this task completed, we will check the LLM Listener logs and use the evaluation fields to determine if there are any toxicity issues, etc.
llm_evaluation_results = llm_listener_pipeline.logs()
display(llm_evaluation_results)
This gives us an output similar to the truncated Fig 3. example below. Notice the toxicity column headings and scoring for Insult, Obscene, and Severe Toxic.
Fig. 3
Once a task is completed, the results are available. The Listener’s inference logs are available for monitoring through Wallaroo assays.
From the assay output chart below (Fig. 4), we can see periods where the toxicity values are within the normal threshold bounds, and we can click into them to see what those interactions look like (Fig. 5).
Fig. 4
Fig. 5
We can also see periods where the output has exceeded the normal threshold, giving us an outlier (Fig. 6).
Fig. 6
And from the above chart we can drill into a more detailed view in Fig. 7.
Fig. 7
In addition to this, we can drill deeper into the logs and look at this period in more detail, even down to individual audit logs of the particular interactions. This lets us see exactly what the model output was and exactly what the scores were across the various metrics, from insulting to obscene to threatening language, as seen in Fig. 8.
Fig. 8
Conclusion:
LLM Listeners are just one of the LLM monitoring methods available for LLMOps that help ensure LLMs remain robust and effective in production by implementing monitoring metrics and alerts. Using LLM Listeners to catch potential issues such as toxicity and obscenity reduces risk and safeguards accurate, relevant outputs.
As mentioned at the beginning Wallaroo is actively working on building out a suite of these listeners and partnering with customers to build out listeners that are specific to their applications and use cases.
Wallaroo LLM Operations Docs: https://docs.wallaroo.ai/wallaroo-llm/
Request a Demo: https://wallaroo.ai/request-a-demo/
The new Microsoft Planner: What’s New and What’s Coming Next
Join the Microsoft Planner product team on Tuesday, September 17 at 9:00 AM Pacific to learn about recent updates to the Planner app, and get a sneak peek at what’s coming next, followed by a live Ask Microsoft Anything. Our product experts will showcase key new features and improvements of the new experience, including demonstrations of how to leverage Copilot in Planner, new Baselines to stay on track, web app updates, integration with Whiteboard, and more. The event includes a live Q&A session where you can ask the team any questions about these updates.
The AMA event on Tuesday is your chance to see the latest features of Planner and hear directly from the product team. This 60-minute session is an opportunity to ask open questions and provide feedback about the new Microsoft Planner. Register today!
Please note, if you cannot attend the live AMA, you can ask questions at any time via the event page—see Comments section. Our team will address all questions during the event, so check back for answers.
Save ingestion costs by splitting logs into multiple tables and opting for the basic tier!
In this blog post I am going to talk about splitting logs into multiple tables and opting for the Basic tier to save cost in Microsoft Sentinel. Before we delve into the details, let's try to understand what problem we are going to solve with this approach.
Azure Monitor offers several log plans which our customers can opt for depending on their use cases. These log plans include:
Analytics Logs – This plan is designed for frequent, concurrent access and supports interactive usage by multiple users. This plan drives the features in Azure Monitor Insights and powers Microsoft Sentinel. It is designed to manage critical and frequently accessed logs optimized for dashboards, alerts, and business advanced queries.
Basic Logs – Improved to support even richer troubleshooting and incident response with fast queries while saving costs. Now available with a longer retention period and the addition of KQL operators to aggregate and lookup.
Auxiliary Logs – Our new, inexpensive log plan that enables ingestion and management of verbose logs needed for auditing and compliance scenarios. These may be queried with KQL on an infrequent basis and used to generate summaries.
The following diagram provides detailed information about the log plans and their use cases:
I would also recommend going through our public documentation for detailed insights into the feature-wise comparison of the log plans, which should help you make the right decisions when choosing log plans.
**Note** Auxiliary logs are out of scope for this blog post, I will write a separate blog on the Auxiliary logs later.
So far, we know about different log plans available and their use cases.
The next question is which tables support Analytics and Basic log plan?
Analytics Logs: All tables support the Analytics plan.
Basic Logs: All DCR-based custom tables and some Azure tables support the Basic log plan.
You can switch between the Analytics and Basic plans; the change takes effect on existing data in the table immediately.
When you change a table's plan from Analytics to Basic, Azure Monitor treats any data that's older than 30 days as long-term retention data based on the total retention period set for the table. In other words, the total retention period of the table remains unchanged unless you explicitly modify the long-term retention period.
Check our public documentation for more information on setting the table plan.
I will focus on splitting Syslog table and setting up the DCR-based table to Basic tier in this blog.
Firewall logs typically contribute a high volume of log ingestion to a SIEM solution.
To manage cost in Microsoft Sentinel, it's highly recommended to thoroughly review the logs and identify which ones can be moved to the Basic log plan.
At a high level, the following steps should be enough to achieve this task:
Ingest Firewall logs to Microsoft Sentinel with the help of Linux Log Forwarder via Azure Monitor Agent.
Assuming the log is being ingested into the Syslog table, create a custom table with the same schema as the Syslog table.
Update the DCR template to split the logs.
Set the table plan to Basic for the identified DCR-based custom table.
Set the required retention period of the table.
At this point, I anticipate you already have log forwarder set up and able to ingest Firewall logs to Microsoft Sentinel’s workspace.
Let’s focus on creating a custom table now
This part used to be cumbersome, but not anymore, thanks to my colleague Marko Lauren, who has done a fantastic job creating this PowerShell script, which can create a custom table easily. All you need to do is enter the pre-existing table name, and the script will create a new DCR-based custom table with the same schema.
Let’s see it in action:
Download the script locally.
Open the script in PowerShell ISE and update workspace ID & resource ID details as shown below.
Save it locally and upload to Azure PowerShell.
Load the file and enter the table name from which you wish to copy the schema.
Provide the new table name of your choice; ensure the name has the suffix "_CL" as shown below:
This should create a new DCR-based custom table which you can check in Log Analytics Workspace > Table blade as shown below:
**Note** We highly recommend you should review the PowerShell script thoroughly and do proper testing before executing it in production. We don’t take any responsibility for the script.
The next step is to update the Data Collection Rule template to split the logs
Since we have already created the custom table, we should create transformation logic to split the logs and send the less relevant logs to the custom table, which we are going to set to the Basic log tier.
For demo purposes, I'm going to split logs based on SeverityLevel. I will drop "info" logs from the Syslog table and stream them to the Syslog_CL table.
Let’s see how it works:
Browse to Data Collection Rule blade.
Open the DCR for Syslog table, click on Export template > Deploy > Edit Template as shown below:
In the dataFlows section, I've created two streams to split the logs. Details about the streams are as follows (a minimal sketch of the corresponding entries is shown after the list):
1st Stream: It drops the Syslog messages where SeverityLevel is "info" and sends the remaining logs to the Syslog table.
2nd Stream: It captures all Syslog messages where SeverityLevel is "info" and sends those logs to the Syslog_CL table.
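For reference, the two dataFlows entries in the exported DCR template might look roughly like the sketch below. This is a minimal illustration: the destination name is a placeholder, and you should keep the stream and destination names that already exist in your own DCR.

"dataFlows": [
    {
        "streams": [ "Microsoft-Syslog" ],
        "destinations": [ "myLogAnalyticsDestination" ],
        "transformKql": "source | where SeverityLevel != 'info'",
        "outputStream": "Microsoft-Syslog"
    },
    {
        "streams": [ "Microsoft-Syslog" ],
        "destinations": [ "myLogAnalyticsDestination" ],
        "transformKql": "source | where SeverityLevel == 'info'",
        "outputStream": "Custom-Syslog_CL"
    }
]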
Save and deploy.
Let’s validate if it really works
Go to the Log Analytics Workspace > Logs and check whether the tables contain the data we defined for them.
In my case, as we can see, the Syslog table contains all logs except those where SeverityLevel is "info"
Additionally, our custom table: Syslog_CL contains those Syslog data where SeverityLevel is “info”
Now the next part is to set the Syslog_CL table to Basic log plan
Since Syslog_CL is a DCR-based custom table, we can set it to the Basic log plan. The portal steps are straightforward (a CLI alternative is sketched after these steps):
Go to the Log Analytics Workspace > Tables
Search for the table: Syslog_CL
Click on the ellipsis on the right side and click on Manage table as shown below:
Select the Basic table plan and set the desired retention period.
Save the settings.
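If you prefer to script this step, the table plan can also be changed with the Azure CLI. The following is a sketch; the resource group and workspace names are placeholders, and exact parameter names may vary between CLI versions, so check az monitor log-analytics workspace table update --help first.

az monitor log-analytics workspace table update \
    --resource-group MyResourceGroup \
    --workspace-name MyWorkspace \
    --name Syslog_CL \
    --plan Basic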
Now you can enjoy some cost benefits, hope this helps.
GitHub Copilot and SSMA: Strap a GenAI conversion booster to your Oracle to SQL Migrations
Overview
In this blog post we will walk through a demonstration of how the generative AI capabilities of GitHub Copilot can work together with SQL Server Migration Assistant (SSMA) for Oracle to accelerate code conversion from PL/SQL to T-SQL and simplify the Oracle migration journey to Azure SQL. Before we delve into how GitHub Copilot can accelerate your code conversion journey, let's get a brief overview of GitHub Copilot, SSMA for Oracle, database migrations, and the criticality of code conversion in the migration process.
What is GitHub Copilot?
GitHub Copilot is an AI coding assistant that helps developers write code faster with less effort, allowing them to focus on problem solving and collaboration. It improves developer productivity by completing code, answering coding questions, fixing issues, generating unit test cases, jumpstarting your project, and much more. GitHub Copilot also has a language translation ability that can translate code from one programming language to another, for example Python to JavaScript, HTML to Markdown, or PL/SQL to T-SQL. In this demo we will see how GitHub Copilot's language translation capability can simplify your Oracle to SQL Server database migration by automatically converting PL/SQL into T-SQL.
GitHub Copilot is available as an extension in IDEs, GitHub Mobile as a chat interface, on command line as GitHub CLI and more. In this demo we are going to use Visual Studio Code extension of GitHub Copilot.
What is SQL Server Migration Assistant (SSMA) for Oracle
Microsoft SQL Server Migration Assistant (SSMA) for Oracle is a desktop tool to automate migration from Oracle database(s) to SQL Server, Azure SQL Database, Azure SQL Database Managed Instance and Azure SQL Data Warehouse. SSMA for Oracle converts Oracle database objects and loads those objects into SQL Server or Azure SQL, and then migrates data. For more information on how to use SSMA for Oracle, please refer to SQL Server Migration Assistant for Oracle.
Database Migrations overview
Database migration is the process of moving data from one or more source platforms to the desired target platforms. Data migration can happen between databases of the same database management system (DBMS) from the same provider or between databases from different DBMS providers. For example, migration of SQL Server from on-premises infrastructure or non-Azure cloud platforms to Azure SQL (which includes three products: Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure VM) is called homogeneous migration, and migration of non-SQL Server databases like Oracle, DB2, or Sybase to Microsoft SQL Server or Azure SQL is called heterogeneous migration. Both homogeneous and heterogeneous migrations are a multi-phase journey with the following phases:
Discovery: Users must first Discover their entire source database estate either in on-prem or in other clouds and determine which of these databases need to be migrated.
Total Cost of Ownership (TCO) comparison analysis between source and target platforms to quantify the potential cost savings by migrating the databases.
Assess the source databases to understand the workload patterns and determine the right configuration of the target database and provision the target.
Convert the code and other source database objects to make them target compatible.
Migrate the data from the source to the target databases.
Conversion using SQL Server Migration Assistant for Oracle
SQL Server Migration Assistant for Oracle provides an extensive conversion rule set engine that converts most of your Oracle objects and PL/SQL code into SQL Server-compatible objects and T-SQL with 100% accuracy. Additionally, SSMA provides multiple reusable customization options for mapping data types and extending the built-in rule engine, which help you accelerate the overall code conversion process. The high-level steps for conversion in SSMA are:
Mapping Oracle and SQL Server data types: SSMA for Oracle offers a default set of type mappings, which meets common conversion requirements in most cases. This data type mapping is inherited by default at the project level for all the underlying object categories and object types. Users can customize them as needed at the object category level and create exceptions.
Assessing Oracle Schemas for conversion: Before loading objects and migrate data to SQL Server, you should determine how complex the migration will be and how much time the migration will take. SSMA for Oracle creates an assessment report that shows the percentage of objects that will be successfully converted, and it also lets you view the specific issues that cause conversion failures. Additionally, SSMA also tells you the amount of manual effort required in hours to convert the objects that could not be automatically converted.
Converting Oracle Schemas into SQL Server Schemas: Converting database objects takes the object definitions from Oracle, converts them to similar SQL Server objects, and then loads this information into the SSMA metadata. It does not load the information into the instance of SQL Server. You can then view the objects and their properties by using the SQL Server Metadata Explorer. During the conversion, SSMA prints output messages to the Output pane and error messages to the Error List pane. Use the output and error information to determine whether you have to modify your Oracle databases or your conversion process to obtain the desired conversion results.
Loading converted database objects into SQL Server: To load the converted database objects into SQL Server without modification, you can have SSMA directly create or recreate the database objects. To modify the Transact-SQL that is used to create objects for more control over object creation, use SSMA to create scripts. You can then modify those scripts to create each object individually, and even use SQL Server Agent to schedule creating those objects. To secure the converted database objects in SQL Server, you can grant and deny permissions on those objects. It is recommended to set the security permissions before performing data migration.
More details about the Oracle to SQL Server migration and the conversion process can be found in Migrate Oracle to SQL Server (OracleToSQL)
GitHub Copilot, a great companion to SSMA in code conversion
SSMA for Oracle provides a comprehensive conversion rule engine that converts the majority of data types and objects into SQL Server-compatible types with 100% accuracy. Objects that could not be converted automatically by SSMA need to be converted manually, and this can take multiple hours of effort. Users can leverage the full power of the generative AI capabilities in GitHub Copilot to automate the conversion of Oracle database objects that SSMA for Oracle could not convert. GitHub Copilot is available as a Visual Studio Code extension, which facilitates conversion of large and complex Oracle procedures and functions to T-SQL procedures and functions with a few clicks. Here is a step-by-step guide on how to leverage the GitHub Copilot VS Code extension to automate the conversion:
Create and view the conversion assessment report generated by SSMA to see the list of all objects that could not be automatically converted into SQL Server-compatible objects. Here is a screenshot showing what the assessment report looks like, capturing the number of objects and the actual objects that could not be converted (pie chart on the left) and the amount of manual effort required to convert them (pie chart on the right):
Select an object (for example, a PL/SQL procedure or function) that could not be successfully converted by SSMA. As shown in the screenshot below, we have selected the get_employee_info() procedure, which has a ref cursor as its return type, something that is not directly supported in T-SQL.
As a next step, copy the query, open the GitHub Copilot extension in VS Code, and paste it into a new file saved with a .sql extension. In this case, I saved the file with the PL/SQL code as ora2sql.sql.
To setup GitHub Copilot in Visual Studio Code follow the instructions in: GitHub Copilot setup in VSCode.
After pasting the PL/SQL procedure in VS Code, hit Ctrl+I to invoke the GitHub Copilot inline chat, which lets you ask questions or give specific commands in natural language. In the chat interface, type "convert PL/SQL to T-SQL" and hit Enter.
In a few seconds the entire PL/SQL code is rewritten, converting the ref cursor return type to a table type, and the generated T-SQL function is correct. We can either Accept or Discard the changes. In this case I will go ahead and accept the suggestion:
As a quick check, we will copy the generated query into SSMS and see if it can be validated and run successfully:
With the T-SQL procedure validated, you can copy the generated T-SQL procedure into the SSMA project and synchronize/load it along with the other converted Oracle objects.
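To make the pattern concrete, here is a purely illustrative example (not the exact code from the demo) of a PL/SQL procedure that returns a ref cursor, together with one T-SQL shape it is commonly converted to, an inline table-valued function:

-- Oracle PL/SQL original (shown as a comment for reference):
--   CREATE OR REPLACE PROCEDURE get_employee_info (
--       p_dept_id  IN  NUMBER,
--       p_results  OUT SYS_REFCURSOR) AS
--   BEGIN
--       OPEN p_results FOR
--           SELECT employee_id, first_name, last_name
--           FROM employees
--           WHERE department_id = p_dept_id;
--   END;

-- One possible T-SQL equivalent: an inline table-valued function that
-- returns the rows directly instead of through a ref cursor.
CREATE FUNCTION dbo.get_employee_info (@dept_id INT)
RETURNS TABLE
AS
RETURN
(
    SELECT employee_id, first_name, last_name
    FROM dbo.employees
    WHERE department_id = @dept_id
);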
Here is the demo video that captures this entire scenario end to end:
GitHub Copilot can handle even more complex conversion scenarios and help you save a lot of manual effort and time by converting them to correct T-SQL syntax with a few clicks. Here is another demo video showcasing a complex PL/SQL package with built-in PL/SQL procedures and a user-defined data type being converted to T-SQL automatically using GitHub Copilot:
To summarize, in this blog post we have seen how the combination of SSMA's rule-based conversion and GitHub Copilot's AI-driven approach can significantly accelerate code conversion, potentially achieving conversion success rates in the high 80s to 90s percentage range.
Discover the power of Copilot prompts | New eBook
Are you ready to revolutionize the way you work with Microsoft 365? I’m excited to share a new eBook I’ve been working on that is now available as a free download: “Discover the power of Copilot prompts.” This is a comprehensive guide filled with insights from numerous experts about their practical prompts to elevate productivity and streamline their workflows within the Microsoft 365 ecosystem.
“One of my favorite use cases for Copilot in Bing is using it as a sparring partner for ideation and the development of initial ideas.” – Karoliina Kettukari, Head of Modern Work – Meltlake and Microsoft MVP [relating to Karoliina’s example within the eBook when using Copilot to help plan her team day … to seize the opportunity to learn and upskill.]
I believe in the power of our community and in all that we bring together. By sharing our knowledge, we make everyone wiser, enabling us to harness the full potential of Copilot for Microsoft365 more effectively. I enjoy learning from others. I enjoy sharing their learning and my own insights. Together we can achieve more and continuously improve how we work.
Get your free copy today: “Discover the power of Copilot prompts“
The value within the pages: Prompting for productivity
“Discover the Power of Copilot Prompts” isn’t just another tech guide. It’s a curated collection of the best prompts from top experts in the Microsoft community (the best community in tech, imo). I reached out to seasoned professionals, influencers, and Microsoft MVPs, asking them to share their favorite prompts that have transformed their productivity and efficiency. This book is the culmination of those insights, offering you real-world applications and innovative techniques that you won’t find anywhere else.
This eBook contains unique suggestions you won't find anywhere else. Each expert shares how they use prompts and walks you through their favorite prompt, including the why and the how. It's high time to try new patterns and practices for building good prompts. The result is refined, improved support, summarization, and creation for you.
What’s Inside?
Microsoft Copilot is a powerful tool designed to enhance task execution and boost productivity. It allows you to devise these prompts as you see fit. So, you can write your own queries and freely express your creativity and get the assistance you need. The eBook covers:
Prompting 101: Discover how it works and incorporate specific elements into your prompts to ensure they generate valuable responses. You just need practice (and a few pointers).
Exclusive Prompts: Detailed prompts applied within Microsoft 365 apps and services including Teams, Word, Excel, PowerPoint, Loop, and more.
Expert Tips: Hear from the best in the community. Each expert shares specific scenarios, what they wanted as an outcome, and how they “promptly” get there.
Guidance for setting priorities: Learn how well-crafted prompts can help you organize and prioritize tasks and make informed decisions.
Get inspired by new ideas and insights and don’t miss out on this opportunity to get your hands on your copy of “Discover the Power of Copilot Prompts.”
Thank you for your support. This is an incredible journey we are on together. Let me know what you think in the comments below; don't hesitate to share any tips or tricks of your own. And if you are on LinkedIn, let me know your thoughts and help me get the word out by resharing my eBook launch post.
Cheers and thank you for your support, Femke
About Femke
Femke Cornelissen is Chief Copilot at Wortell, a leading Microsoft partner in the Netherlands. In this role, she aligns the technical vision of the organization with its mission and strategy, and she highlights the latest developments and best practices in the field of Workplace and Productivity, with a focus on Microsoft 365. She has over four years of experience as a Business Consultant and a Team Leader helping her clients adopt and optimize their modern workplace solutions using Microsoft technologies.
She is also passionate about engaging with the Microsoft community and sharing her knowledge and insights on Microsoft 365. She is a Microsoft Most Valuable Professional (MVP) in Microsoft 365 Apps and Services, a recognition for her contributions and expertise in the Microsoft ecosystem. Femke volunteers at Experts Live Netherlands, a platform for Microsoft experts to network and exchange information, and is a co-founder and community manager of Dutch Women in Tech (DWIT), an initiative that supports and empowers more women to pursue a career in IT. Additionally, she hosts the podcast Cloud Conversations, where she interviews Microsoft community members and discusses their experiences and challenges with Microsoft solutions. Her goal is to inspire and educate others on the possibilities and benefits of Workplace and Productivity innovation.
controlling Two F28379D simultaneously
Hi there,
In my project, I want to control two motors: one is the load motor and the other is the MUT. I used two F28379D LaunchPads with two three-phase inverters. The first motor is connected to the first inverter, which is controlled by the first LaunchPad, and the other motor is connected in the same way. The DC link and auxiliary supply are shared between them, and the ground is connected between all of them. However, when I control the first motor (the load), the ADCs on the other LaunchPad pick up a lot of noise. I am not sure whether the two LaunchPads should be supplied individually or whether they have to communicate with each other. Could you please help with this matter?
Sudden “Incorrect number or types of inputs or outputs for function invoke” error that I now have.
I have some code that previously worked for me with MATLAB and FRED, but is now throwing me an error. Simply put, I’m trying to open a file in FRED using a MATLAB script. This code had worked before the CrowdStrike incident, but after the incident I needed to reset my computer and reinstall all applications, and I’m not sure if having a different version of MATLAB is throwing an error for this. For reference, I’m using MATLAB version R2024a and FRED version 22.40. I tried contacting FRED support about this, but they had no idea how to fix it and suggested I look for MATLAB support.
I’m running the following lines of code:
system('cd c:\Program Files\Photon Engineering\FRED 22.40.4\Bin & start fred.exe'); pause(1)
fredsvr = actxserver('FRED.Application'); pause(0.1)
freddoclist = get(fredsvr,'Documents'); pause(0.1)
set(fredsvr,'Visible', 1);
testdoc = 'C:\Users\USERNAME\Documents\TESTDOCUMENT.frd';
freddoc = invoke(freddoclist,'Open',testdoc); pause(0.5);
When I get to the last line I get a message telling me that there is an "Incorrect number or types of inputs or outputs for function invoke", which is an error I didn't previously see when I ran the code and it worked (a few months ago). It seems like there is an issue with the type for freddoclist, because in the workspace it's telling me that the type is "1×1 �", which seems wrong. I'm not sure why it's suddenly creating this type, but I'm not sure how to fix it. Any ideas? Could I be missing a MATLAB application?
Filtered Historical Simulation Example Determination of Variance
I have a question on determining the variance in the Econometrics Toolbox example "Using Bootstrapping and Filtered Historical Simulation to Evaluate Market Risk". Everything is pretty straightforward until the end of the example, where they use the filter function to determine the portfolio returns. Here is the command.
portfolioReturns = filter(fit, bootstrappedResiduals, ...
    'Y0', Y0, 'Z0', Z0, 'V0', V0);
I would like to obtain the conditional variance paths. Hence I used the command
[V, Y] = filter(fit, bootstrappedResiduals, ...
    'Y0', Y0, 'Z0', Z0, 'V0', V0);
Here is a plot of the first 10 horizon results for V
Here is the plot of the first 10 horizon results for Y
The results for Y look like one would expect – plus and minus values around zero. However, the results for V should all be positive, since according to the filter documentation for conditional variances, V is the filtered variance output. When comparing the values of V to those of the variable portfolioReturns, I found them to be identical.
I can always square the values of portfolioReturns to get the squared returns. Is this the value of the conditional variance for the FHS simulation?
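An editorial note that may explain the identical values (this is an assumption, not part of the original post): if fit is an arima model object, as in the published example, filter returns the responses, innovations, and conditional variances in that order, so requesting only two outputs puts the simulated returns into the first variable. A minimal sketch under that assumption:
% Minimal sketch (assumes fit is an arima model object, as in the example):
% request all three outputs so the conditional variance paths land in V.
[Y, E, V] = filter(fit, bootstrappedResiduals, ...
    'Y0', Y0, 'Z0', Z0, 'V0', V0);
% Y: simulated returns, E: innovations, V: conditional variances (all positive)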
conditional variance, filtered historical simulation MATLAB Answers — New Questions
limit elevation angles for a phased array on a platform/aircraft to satellite
I am able to run and modify this example: Aircraft-to-Satellite Communication for ADS-B Out – MATLAB & Simulink (mathworks.com)
except that I do not know how to limit the elevation angles to the satellite, which is easy to do for a ground station, as in the ground station code below:
% Create a satellite scenario with AutoSimulate set to false
sc = satelliteScenario('AutoSimulate', false);
% Define the satellite with a higher orbit
semiMajorAxis = 7000e3;    % Semi-major axis in meters (higher orbit)
eccentricity = 0.001;      % Eccentricity
inclination = 90;          % Inclination in degrees (adjusted for closer pass)
rightAscension = 164.5;    % Right ascension of the ascending node in degrees (adjusted for longitude)
argumentOfPeriapsis = 0;   % Argument of periapsis in degrees
trueAnomaly = 0;           % True anomaly in degrees
sat = satellite(sc, semiMajorAxis, eccentricity, inclination, ...
    rightAscension, argumentOfPeriapsis, trueAnomaly);
% Set the elevation range
minElevation = 37;         % Minimum elevation in degrees
maxElevation = 90;         % Maximum elevation in degrees
% Define the ground station (representing the aircraft)
gs = groundStation(sc, 'Name', 'MyAircraft', 'Latitude', 40.0150, ...
    'Longitude', -105.2705, 'Altitude', 10000, 'MinElevationAngle', minElevation);
It is a built-in property of the groundStation function. How can I use the above example but limit the elevation angles in the same way as the groundStation property allows?
This is some of the aircraft code:
waypoints = [ ... % Latitude (deg), Longitude (deg), Altitude (meters)
    40.6289, -73.7738, 3; ...
    40.6325, -73.7819, 3; ...
    40.6341, -73.7852, 44; ...
    40.6400, -73.7974, 265; ...
    40.6171, -73.8618, 1012; ...
    40.5787, -73.8585, 1698; ...
    39.1452, -71.6083, 11270; ...
    34.2281, -66.0839, 11264; ...
    32.4248, -64.4389, 970; ...
    32.3476, -64.4565, 574; ...
    32.3320, -64.4915, 452; ...
    32.3459, -64.5712, 453; ...
    32.3610, -64.6612, 18; ...
    32.3621, -64.6678, 3; ...
    32.3639, -64.6777, 3];
timeOfArrival = duration([ ... % time (HH:mm:ss)
    "00:00:00"; ...
    "00:00:20"; ...
    "00:00:27"; ...
    "00:00:43"; ...
    "00:01:47"; ...
    "00:02:21"; ...
    "00:21:25"; ...
    "01:32:39"; ...
    "01:54:27"; ...
    "01:55:47"; ...
    "01:56:27"; ...
    "01:57:48"; ...
    "01:59:49"; ...
    "01:59:55"; ...
    "02:00:15"]);
trajectory = geoTrajectory(waypoints, seconds(timeOfArrival), 'SampleRate', sampleTime, ...
    'SamplesPerFrame', 10, 'AutoPitch', true, 'AutoBank', true);
minElevationAngle = 30; % degrees
aircraft = platform(sc,trajectory, Name="Aircraft", Visual3DModel="airplane.glb");
The limits I want to impose on the simulation and viewing are elevation angles from 90 down to 30 degrees.
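One possible workaround (an editorial sketch, not from the original post): platform has no MinElevationAngle property, so the elevation from the aircraft to the satellite can instead be computed and used as a mask. The sketch assumes the sc, sat, and aircraft variables from the snippets above and that aer accepts platform objects in your release:
% Minimal sketch: mask satellite visibility by elevation angle seen from the aircraft.
% Assumes sc, sat, and aircraft are defined as above and that aer() accepts
% platform objects in this release.
minElevationAngle = 30;   % degrees
maxElevationAngle = 90;   % degrees
[az, el, slantRange] = aer(aircraft, sat);   % angles/range from the aircraft to the satellite
inView = (el >= minElevationAngle) & (el <= maxElevationAngle);   % per-time-step mask
% Apply inView when post-processing link results so only geometries between
% 30 and 90 degrees of elevation are kept.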
Thank you for the help.
aircraft, satellite, tracking, matlab MATLAB Answers — New Questions
How to hide console messages for a matlab executable in batch mode
I have used the Application Compiler in MATLAB R2023b to package two executable applications (e.g. App1.exe and App2.exe).
I also chose the setting "Do not display the Windows Command Shell (console) for execution" in my project file for both of them.
App2.exe is called inside one of the functions in App1.exe, and that call is hard-coded.
I want to run App1.exe in batch mode, and I wrote a batch file with a single command to run it.
In the terminal, the console messages of App1.exe don’t show up (which is expected), but the console messages of App2.exe still appear.
I also tried "App1.exe >nul 2>&1" in the batch file, which made the situation even worse: the console messages of App1.exe also show up after this change. I already use @echo off as well, but that was not the solution.
How can I also hide the console messages of App2.exe?
By the way, this problem did not occur in Matlab202b before.
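A possible direction (an editorial sketch, not from the original post): if App1 launches App2 through MATLAB’s system function, the child’s console output can be captured into a variable instead of reaching the parent console. The path below is hypothetical:
% Minimal sketch (assumes App1 calls App2 via system(); the path is hypothetical).
% Capturing stdout and stderr into cmdout keeps them off the parent console.
app2 = '"C:\Program Files\MyCompany\App2.exe"';   % hypothetical install location
[status, cmdout] = system([app2 ' 2>&1']);        % merge stderr into the captured output
if status ~= 0
    % log or handle the failure here instead of printing to the console
end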
batch, executable MATLAB Answers — New Questions
How to KQL query *live* EmailEvents table and NOT the streaming API
EmailEvents table in the advanced hunting schema – Microsoft Defender XDR | Microsoft Learn – this page tells us:
Note
* The LatestDeliveryLocation and LatestDeliveryAction columns are not available in the Streaming API.
I’ve found that a lot of my queries come back with a blank LatestDeliveryLocation. This suggests I’m searching via the Streaming API. But I don’t want that; I want to search the live EmailEvents table and filter on LatestDeliveryLocation. I am working in the Defender portal, within the Advanced Hunting section. Example query:
// Works (time range set in UI dropdown):
EmailEvents
| where LatestDeliveryLocation in~ ('Quarantine', 'Junk folder') and DeliveryLocation =~ 'Inbox/folder'
// Does NOT work:
EmailEvents
| where TimeGenerated >= ago(1d)
| where LatestDeliveryLocation in~ ('Quarantine', 'Junk folder') and DeliveryLocation =~ 'Inbox/folder'
So it seems as though if your query sets the time range itself, you’re searching the Streaming API. Can anyone please confirm that I have understood this correctly? My next question is: can I add something to my query to ensure I’ll be searching the live table?
Microsoft 365 Defender Streaming API: Identity and CloudApp Events in General Availability – Microsoft Community Hub – I asked this in the comments over there too.
Read More
Bing Maps “Birds eye View” Alternative
Hi Community,
I need to batch download high-resolution images of specific locations based on latitude and longitude. Previously, I used Bing Maps Bird’s Eye View, which provided the detail and perspective needed to identify objects like 40-yard dumpsters. With the Bing Maps API now discontinued for new users, I’m looking into Azure Maps.
My requirements:
High clarity to accurately identify objects, with a zoom level that I can set.
Bird’s Eye View (like Bing Maps) to capture depth and dimensions.
Can you suggest which Azure Maps services would best match these needs? Any guidance would be helpful.
Thanks!
KQL Query problem with two double quotes
Hello,
We have a SharePoint Highlighted Content web part with a KQL query like this:
RefinableString110:"True"
The web part worked fine until a couple of weeks ago; then, all of a sudden, it didn’t return any results. While investigating the issue, we found a workaround: adding a double quote at the end of the search query fixed it:
RefinableString110:True"
which was quite strange.
We checked with the Search Query Tool and got the expected results: the first query worked and the second query returned HTTP/1.1 500 Internal Server Error, so it is not an issue with the Microsoft search service.
We tested with all combinations of quotes, capitals, etc. The only thing that worked was putting one double quote at the end of the search query.
Could you please help?
Screenshots show the issue.
Thanks.
Regards,
Read More