Author: PuTI
Partner Blog | Empowering innovation: Microsoft partners unite for transformative success
Our guest contributor for this blog is Regina Johnson, BPGI Global Strategy Director, with contributions from Raamel Mitchell, BPGI Global Director of Business Development.
Collaborative success in the Microsoft partner ecosystem
In a world where technology shapes our everyday lives, entrepreneurs in the Microsoft AI Cloud Partner Program exemplify innovation, leveraging Microsoft technology to deliver outstanding customer solutions and services through partnership. The Microsoft partner ecosystem is a testament to the transformative power of collaboration as a proven pathway to success. The FY23 Catalyst Accelerator is a prime example, with a cohort model developed with the Black Channel Partner Alliance (BCPA), the Microsoft Black Partner Growth Initiative (BPGI), and AppMeetup. This cohort brought together partners excelling in several industries and building on the Microsoft industry clouds. Their expertise in leveraging Microsoft technologies has led to advancements in their respective fields. As they continue their journey, these partners are part of a larger global partner ecosystem, working alongside thousands of partners who are all pushing the boundaries of what’s possible in the tech world.
Pinterest Druid Holiday Load Testing
By Isabel Tallam | Senior Software Engineer; Jian Wang | Senior Software Engineer; Jiaqi Gu | Senior Software Engineer; Yi Yang | Senior Software Engineer; and Kapil Bajaj | Engineering Manager, Real-time Analytics team
Like many companies, Pinterest sees an increase in traffic in the last three months of the year. We need to make sure our systems are ready for this increase so we don’t run into any unexpected problems. This is especially important as Pinners come to Pinterest at this time for holiday planning and shopping. Therefore, we run a yearly exercise of testing our systems with additional load, verifying that they can handle the expected traffic increase. For Druid, we run several checks to verify:
Queries: We make sure the service can handle the expected increase in QPS while still meeting the P99 latency SLA our clients need.
Ingestion: We verify that the real-time ingestion is able to handle the increase in data.
Data size: We confirm that the storage system has sufficient capacity to handle the increased data volume.
In this post, we’ll provide details about how we run the holiday load test and verify Druid is able to handle the expected increases mentioned above.
Pinterest traffic increases as users look for inspiration for holidays.
How We Run Load Tests
As mentioned above, the areas our teams focus on are:
Can the system handle increased query traffic?
Can the system handle the increase in data ingestion?
Can the system handle the increase in data volume?
Can the System Handle Increased Query Traffic?
Testing query traffic and SLA is a main goal during holiday load testing. We have two different options for load testing our Druid system. The first option generates queries based on the current data set in Druid and then runs these queries against Druid. The other option captures real production queries and re-runs them. Both options have their advantages and disadvantages.
Sample Versus Production Queries
The first option, using generated queries, is fairly simple to run at any time and does not require preparation such as capturing queries. However, this type of testing may not accurately show how the system will behave in production. A real production query may look different and touch different data, query types, and timeframes than generated queries do. Additionally, corner cases would be missed entirely by this type of testing.
The second option has the advantage of using real production queries that closely resemble future traffic. The disadvantage, however, is that setting up the tests is more involved: production queries need to be captured and potentially updated to match the new timeline when holiday testing is performed. In Druid, running the same query today versus one week from today may give different latency results, because data moves between host stages over time: recent data is served by faster, high-memory hosts during its first days or weeks, while older data migrates to slower, disk-backed stages.
We decided to move ahead with real production queries because one of our priorities was to replicate production use cases as closely as possible. We made use of a Druid native feature that automatically logs any query that is being sent to a Druid broker host (broker hosts handle all the query work in a Druid cluster).
Test Environment Setup
Holiday testing is not done in the production environment, as this could adversely impact the production traffic. However, the test needs an environment setup as similar to the production environment as possible. Therefore, we created a copy of the production environment that is short-lived and solely used for testing. To test query traffic, the only stages required are brokers, historical stages, and coordinators. We have several tiers of historical stages in the production environment and we replicated the same setup in the test environment as well. We also made sure to use the same host machine types, configurations, pool size, etc.
The data we used for testing was copied over from production. We used a simple MySQL dump to create a copy of all the segment records stored in the production environment. Once the dump was loaded into the MySQL instance in the test environment, the coordinator automatically triggered the data to be replicated to the historical stages of the test environment.
Before initiating the copy, however, we needed to identify what data is required. This will depend on the client team and on the timeframe their queries request. In some cases, it may not be necessary to copy all data, but only the most recent days, weeks, or months.
The test environment is set up with the same configuration and hosts as the production environment.
Our test system first connects to the broker hosts in the test environment, then loads the queries from the log file and sends them to the brokers. We use a multi-threaded implementation to increase the QPS sent to the broker nodes. First, we run tests to identify how many threads are needed to match baseline production traffic (for example, 300 QPS). Based on that, we can define how many threads we need for testing expected holiday traffic (two, three, or more times the standard traffic).
In our use case, we had loaded the data received up to a specific date (e.g., October 1st). We then re-ran the captured log files from the same date or the day before, to match production behavior. Our test script was also able to rewrite the timeframe in a query to match either the current time or a predefined time, allowing any log file to be run and translated to the data available in the test environment.
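As a rough illustration, a minimal replay harness along these lines could load captured queries and fan them out across worker threads. This is a sketch under assumptions (queries captured one JSON object per line, a placeholder broker URL, and a no-op timeframe shift), not Pinterest’s actual tooling:

import json
import threading
import urllib.request

BROKER_URL = "http://test-broker:8082/druid/v2"  # placeholder test broker endpoint
LOG_FILE = "captured_queries.log"                # placeholder capture file
THREADS = 8                                      # raise this to multiply replayed QPS

def shift_intervals(query, offset_days):
    # Druid native queries carry ISO-8601 "intervals"; a real implementation
    # would parse and shift them to match the data loaded in the test cluster.
    return query

def replay(queries):
    # Send each query to the test broker; latency is captured by broker metrics.
    for q in queries:
        req = urllib.request.Request(
            BROKER_URL,
            data=json.dumps(q).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req).read()

with open(LOG_FILE) as f:
    captured = [shift_intervals(json.loads(line), offset_days=0) for line in f]

# Split the captured queries across worker threads to reach the target QPS.
chunks = [captured[i::THREADS] for i in range(THREADS)]
workers = [threading.Thread(target=replay, args=(c,)) for c in chunks]
for w in workers:
    w.start()
for w in workers:
    w.join()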
Evaluating the Results
To determine the health of our system, we used our existing metrics to compare QPS and P99 latency on brokers and historical nodes, as well as determining system health via indicators like CPU usage of the brokers. These metrics help us identify any bottlenecks.
Query response time with normal traffic and 2x increase on basic system setup.
Typical bottlenecks can include the historical nodes or the broker nodes.
The historical nodes may show a higher latency for increased QPS, which will in turn increase the overall latency. To resolve this, we would add mirror hosts and increase the number of replicas of the data to support better latency under higher load. This step is something that will take time to implement, as hosts need to be added and data needs to be loaded, which can take several hours depending on the data size. Therefore, this is something that should be completed before traffic increases on the production system.
If the broker nodes are no longer able to handle the incoming query traffic, the size of the broker pool needs to be increased. If this is seen in the test environment, or even the production environment, it is much faster to increase the pool size and can potentially be done ad-hoc as well.
Testing with increased query load in the test environment helps us determine which steps are needed to support the expected holiday traffic. We can make these configuration changes in advance, and we can make the support team aware of the changes and of the maximum traffic the system can handle within the specified SLA (QPS and P99 latency requirements from the client teams).
Can the System Handle the Increase in Data Ingestion?
Testing the capacity for real-time data ingestion is similar to testing query performance. One can start by estimating the supported ingestion rate based on the dimensions and cardinality of the ingested data. However, this is only a guideline, and for some high-priority use cases it is a good idea to test early on.
We set up a test environment with the same capacity, configuration, etc. as the production environment. However, in this step, some help from client teams may be required, as we also need to test with increased data from the ingestion source, such as a Kafka topic.
When reviewing the ingestion test, we focused on several key metrics. The ingestion lag should stay low, and the numbers of successful and rejected events (rejected because they fall outside the rejection window) should closely match the corresponding production values. We also validate the ingested data and the general system health of the overlord and middle manager stages, the stages that handle ingestion of real-time data.
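A health check in this spirit could compare the test cluster’s ingestion counters against production baselines; the counter names and thresholds below are assumptions for illustration, not our actual values:

# Hypothetical thresholds; real values come from the client teams' SLAs.
THRESHOLDS = {
    "max_lag_ms": 30_000,         # assumed ceiling for Kafka ingestion lag
    "rejected_ratio_delta": 0.01, # rejected-event ratio may drift <= 1% from prod
}

def ingestion_healthy(test, prod):
    """test/prod: dicts with events_ingested, events_rejected, and lag_ms counters."""
    if test["lag_ms"] > THRESHOLDS["max_lag_ms"]:
        return False
    test_rejected = test["events_rejected"] / max(test["events_ingested"], 1)
    prod_rejected = prod["events_rejected"] / max(prod["events_ingested"], 1)
    return abs(test_rejected - prod_rejected) <= THRESHOLDS["rejected_ratio_delta"]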
Sample metrics for successfully ingested events, rejected events, and Kafka ingestion lag.
Can the System Handle the Increase in Data Volume?
Evaluating whether the system can handle the increase in data volume is probably the simplest and quickest check, though it is just as important as the previous steps. For this, we take a look at the coordinator UI: here we can see all historical stages, their pool sizes, and at what capacity they are currently running. Once clients provide details on the expected increase in data volume, it is a fairly simple process to calculate the amount of additional data that needs to be stored over the holiday period and potentially for some period after that.
The space is at a healthy percentage (~70%) allowing for some growth.
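The calculation itself is straightforward; a hedged back-of-the-envelope version (every number below is a placeholder, not one of our actual figures) might look like:

current_used_tb = 455.0    # storage currently used across historical stages
total_capacity_tb = 650.0  # total capacity (roughly 70% utilized today)
daily_ingest_tb = 1.2      # normal per-day growth in stored segments
holiday_multiplier = 2.0   # expected increase reported by the client teams
holiday_days = 45          # holiday window plus a post-holiday retention buffer

projected_used_tb = current_used_tb + daily_ingest_tb * holiday_multiplier * holiday_days
utilization = projected_used_tb / total_capacity_tb
print(f"Projected utilization: {utilization:.0%}")  # ~87% with these numbers
if utilization > 0.8:
    print("Plan to add historical hosts before the holiday change freeze.")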
Results
In the tests we ran this year, we found that our historical stages are in a very good state and can handle the additional traffic expected during the holidays. We did see, however, that the broker pool may need additional hosts if traffic exceeds a certain threshold. We have kept this communication visible to the client and support teams so everyone is aware that the pool size may need to be increased.
Learnings
Timing is critical with holiday testing. This project has a fixed end date by which all changes need to be completed before traffic increases, and the teams need to have all the pieces in place before results are due. As with many projects, we need to leave buffer time for unexpected changes in timeline and requirements.
Druid is a backend service, which is not always top of mind for client teams as long as it is performing well. Therefore, it is a good idea to reach out to client teams before testing starts to get their estimates of expected holiday traffic increases. Some of our clients reached out to us on their own; by that time, however, the due date for submitting capacity increase requests to governance teams had already passed. In these cases, or where client teams are not yet sure, it is good practice to make a general estimate of the traffic increase and start testing with those numbers.
Keeping track of holiday planning and the changes applied each year is also good practice. Having a history of changes and tracking the actual increase against the original estimates helps in making educated estimates of the traffic increases to expect the following year.
Knowing the capacity of brokers and historical stages before the holiday updates also makes it easier to decide how far to scale the clusters back down after the holidays, while accounting for month-over-month organic growth.
Future Work
In this year’s use case, we chose to capture broker logs to retrieve the queries we wanted to replay against Druid. This option worked for us this time, though we plan to look into other ways of capturing queries going forward. The log file option works well for a one-off need, but it would be useful to log queries continuously and store them in Druid. This can help with debugging issues and identifying high-latency queries that may need tweaking for better performance.
Cost Reduction in Goku
By Monil Mukesh Sanghavi | Software Engineer, Real Time Analytics Team; Rui Zhang | Software Engineer, Real Time Analytics Team; Hao Jiang | Software Engineer, Real Time Analytics Team; and Miao Wang | Software Engineer, Real Time Analytics Team
In 2018, we launched Goku, a scalable, high-performance time-series database system that served as the storage and query serving engine for short-term metrics (less than one day old). In early 2020, we launched GokuL (Goku long term), which extended Goku’s capability by supporting long-term metrics data (i.e., data older than a day and up to a year). Both of these completely replaced OpenTSDB. For GokuL, we used three clusters of SSD-backed i3.4xlarge EC2 instances, which we realized over time were very costly. Reducing this cost was one of our primary aims going into 2021. This blog post will cover the approach we took to achieve that goal.
Background
We use a tiered approach to segregate the long term data and store it in the form of buckets.
Table 1: The tiered approach to storing long-term data
Tiers 1–5 contain the data stored on the GokuL (long term) clusters. GokuL uses RocksDB to store its long term data, and the data is ingested in the form of SST files.
Query Analysis
We analyzed the queries going to the long term cluster and observed the following:
Very few metrics (approximately 6K out of a total of 10B) had data points older than three months queried from GokuL.
More than half of the GokuL queries had specified rollup intervals of one day or more.
Tier 5 Data Analysis
We randomly selected a few shards in GokuL and analyzed the data. We observed that the disk consumption of tier 5 data far exceeded that of all the other tiers (1–4) combined. This was despite the fact that tier 5 contains only data rolled up at one-hour granularity, whereas the other tiers contain a mix of raw and 15-minute rolled-up data.
Table 2: SST File size for each bucket in MiB
Solutions
From the query and tier 5 analyses, we inferred that tier 5 data (which holds six buckets of 64 days of data each) was both the least queried and the most disk consuming. We targeted our solutions at this tier, as it would give us the most benefit. Some of the solutions we discussed are described below.
Namespace
Implementing a feature called namespaces would store configurations such as ttl, rollup interval, and tier configuration for the set of metrics belonging to each namespace. Uber’s M3 has a similar solution. This would help us set appropriate configurations for a select set of metrics (e.g., a lower ttl for metrics that do not require longer retention). The time to production for this project was longer, so we decided to spin it off as a separate future project; it is now being actively worked on.
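To make the idea concrete, a namespace entry might carry per-metric-set retention and rollup settings along these lines; the field names and values are illustrative, not Goku’s actual schema:

# Hypothetical namespace configurations keyed by metric prefix.
NAMESPACES = {
    "ads_metrics": {
        "ttl_days": 365,           # needs the full one-year retention
        "rollup_interval": "15m",
        "tiers": [1, 2, 3, 4, 5],  # reaches the long-term tier 5 buckets
    },
    "experiment_metrics": {
        "ttl_days": 90,            # shorter retention: never reaches tier 5
        "rollup_interval": "1h",
        "tiers": [1, 2, 3],
    },
}

Dropping the ttl for even a fraction of the 10B metrics this way would shrink the tier 5 buckets, which the analysis above showed are the most expensive to store.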
Rollup Interval Adjust for Tier 5 Data
We experimented with changing the rollup interval of tier 5 data from one hour to one day and observed the change in the final SST file(s) size for the tier 5 bucket.
Table 3
The savings from this solution were not compelling enough to justify putting it into production.
On Demand Loading of Tier 5 Data
GokuL clusters would only store data from tiers 1–4 on startup and would load the tier 5 buckets as necessary (based on queries). The cons of this solution were:
Users would have to wait and retry the query once the corresponding tier 5 bucket from S3 had been ingested by the GokuL host.
Once ingested, the bucket would remain in GokuL unless thrown away by an eviction algorithm.
We decided not to go with this solution because it was not user friendly.
Tiered Storage
We decided to move tier 5 data onto a separate HDD-based cluster. While there was a notable difference in query latency, it was acceptable because the number of queries hitting this tier was much lower. We calculated that tier 5 was consuming approximately 1 TB on each of the 650 hosts in the GokuL cluster, and we decided to use d2.2xlarge instances to store and serve the tier 5 data.
Table 4
The cost savings from this solution were huge: we replaced around 325 i3.4xlarge instances with 111 d2.2xlarge instances, reducing our costs by nearly 30–35%.
To support this, we had to design and implement tier-based routing in the Goku root cluster, which routes queries to the short-term and long-term leaf clusters.
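A sketch of how that routing decision could work, assuming the root keys off the age of the data a query touches; the cluster names and the tier 5 boundary are illustrative, not Goku’s actual implementation:

import time

DAY = 86_400
TIER5_START_DAYS = 80  # assumed age at which tier 5 buckets begin (illustrative)

def route(query_start_ts, now=None):
    """Pick the leaf cluster based on the oldest data point the query touches."""
    now = now if now is not None else time.time()
    age_days = (now - query_start_ts) / DAY
    if age_days < 1:
        return "goku-short-term"  # in-memory short-term cluster
    if age_days < TIER5_START_DAYS:
        return "gokul-ssd"        # tiers 1-4 on the SSD-backed hosts
    return "gokul-hdd"            # tier 5 on the d2.2xlarge HDD hosts

A query whose time range spans a boundary would fan out to both leaf clusters, with the root merging the partial results.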
In the future, we can evaluate whether to reduce the number of replicas, trading some availability for cost given the low number of queries to this tier.
RocksDB Tuning
As mentioned above, GokuL uses RocksDB to store the long-term data. We observed that the RocksDB options we were using were not optimal for Goku’s high-volume, low-QPS data.
We experimented with a stronger compression algorithm (ZSTD at level 5), which reduced disk usage by 40%. In addition, we enabled the partitioned index/filter feature, in which only the top-level index is loaded into memory. On top of this, we enabled caching of filter and index blocks with higher priority, so that they use the same cache as the data blocks while minimizing the performance impact.
With both of the above changes, the latency difference was small and the reduction in data space usage was approximately 50%. We immediately put this into production and shrank the size and cost of our GokuL clusters by another half.
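For reference, the tuning described above corresponds roughly to the following RocksDB options; the keys mirror RocksDB’s C++ option names, and this is an illustrative summary rather than a runnable binding:

# Summary of the tuning, with keys mirroring RocksDB's C++
# Options/BlockBasedTableOptions fields (not a runnable RocksDB binding).
GOKUL_ROCKSDB_TUNING = {
    "compression": "kZSTD",                 # stronger algorithm than the default
    "compression_opts.level": 5,            # ZSTD level 5: ~40% less disk usage
    "partition_filters": True,              # partitioned index/filters, so only
    "index_type": "kTwoLevelIndexSearch",   #   the top-level index stays in memory
    "cache_index_and_filter_blocks": True,  # share the block cache with data blocks
    "cache_index_and_filter_blocks_with_high_priority": True,  # evict data first
}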
What’s Next
Namespace
As mentioned, we are actively working on the implementation of the namespace feature, which will help us reduce long-term cluster costs even further by lowering the ttl for the many current metrics that do not need such long retention.
Acknowledgments
Huge thanks to Brian Overstreet, Wei Zhu, and the observability team for providing and supporting solutions on the table.
Microsoft Excel—the workhorse for so many jobs
Microsoft Excel is one of the core apps in the Microsoft 365 suite and is quite familiar to many small and medium business (SMB) users. But are you getting the most out of Excel?
Excel can do much more than create and edit spreadsheets
Customers who explore Excel’s features a bit more find that they can apply it to many tasks. You can quickly learn to use it as a data analytics tool, and it’s also great for building quick custom dashboards. You can also design a custom template so that your work is always on-brand. Excel can do a lot if you’re a little inventive in how you use it.
Bandido Solutions uses Excel to talk to customers
Jimmy Davidson, owner of Bandido Solutions and Bandido Woodworks, uses Microsoft 365 to support his business. He’s found that Microsoft Teams is key for working across three locations and that Microsoft OneNote helps his team keep punch lists continuously managed and up to date, but most of all, he looks to Excel to run his business and communicate directly with customers about complex projects.
“If I had to pick one app in the Microsoft 365 Business suite, it would be Excel,” Davidson says. “It’s the root of our organization.”
Davidson stands out because he not only uses Excel for its core spreadsheet features, he also uses it as a communication channel with his customers. He especially values how Excel makes his work transparent to them. By sending customers Excel breakdowns that list the tasks and timelines for a kitchen or bath remodel, Davidson can show them how their estimates were built, line by line. Since many of his customers are already familiar with Excel, using it allows Davidson and his customers to discuss the work together, quickly clarify concerns, and adjust schedules. This openness builds rapport and trust and helps move projects along more quickly.
By using Excel’s template capabilities, Davidson can design a reusable document format that lets customers see their breakdowns visually, using dashboard graphics to make the project schedule easy to understand and act on. He can still include the source spreadsheets in supplementary tabs, but what the customer sees first is an easy-to-consume summary of the project they are looking to his company to deliver. That new kitchen or bath now seems like a reality.
Resources
Find the right Microsoft 365 business plan for your business.
Learn more about how to set up and use your Microsoft 365 subscription and find tips and templates to help you accomplish your business tasks.
Get free resources, tech training, and guidance to keep your business thriving and growing.
Partners can access training resources, customer decks and deployment checklists to do more with Microsoft 365.
The Top 5 Healthcare Internet of Things (IoT) Vulnerabilities
The Internet of Things (IoT) is like a teenager: full of potential, but with some growing up to do. Just as the internet connects people, IoT connects our smart gadgets. However, as with any fledgling technology, there are growing pains that can’t be ignored as connected devices become more integrated into hospitals and our everyday lives. The following five IoT hacks demonstrate the current vulnerabilities in IoT and show why healthcare IT professionals need to make sure their IoT-enabled healthcare devices are secured, protected, and monitored.
1. The Mirai Botnet Attack
This hack took place in October of 2016, and it still ranks as one of the largest DDoS attacks ever launched. The attack targeted the DNS service provider Dyn using a botnet of IoT devices. It managed to cripple Dyn’s servers and brought huge sections of the internet down. Media titans like Twitter, Reddit, CNN, and Netflix were affected. It was like the internet had a cold and everyone caught it. Hospitals nationwide were affected as well.
The botnet is named after the Mirai malware it used to infect connected devices. Once it successfully infected a vulnerable IoT gadget, it automatically searched the internet for other vulnerable devices. Whenever it found one, the malware used the default username and password to log in to the device, install itself, and repeat the process. It was like a game of dominoes, but with malware. Many of these devices had outdated firmware or weak default passwords, which made them perpetually vulnerable and easy to hack. This attack demonstrates the importance of strong passwords and regular firmware updates. These updates often come with patches for current vulnerabilities, so you should never skip them. Creating strong, complex passwords for all your IoT devices is a must before adding them to your network. It’s like putting a lock on your diary, but for your devices.
2. The Pacemaker Hack
IoT devices have tremendous potential in the field of medicine. However, the stakes are very high as far as security is concerned. This was starkly illustrated by an incident in 2017 when the FDA announced that a serious vulnerability had been discovered in implantable pacemakers. Anyone who has watched Homeland will be familiar with this attack. It’s like a real-life episode of Black Mirror.
In this case, the vulnerability lay in the transmitter that pacemakers used to communicate with external services. These pacemakers relayed information about a patient’s condition to their physicians, which made monitoring each patient much easier. Once attackers gained access to a pacemaker’s transmitter, they were able to alter its functioning, deplete the battery, and even administer potentially fatal shocks.
3. The Baby Heart Monitor Hack
As more IoT devices make their way into our homes, privacy is becoming a huge concern. For example, the Owlet baby heart monitor may seem absolutely harmless, but its lack of security makes it and similar devices extremely vulnerable to hacking. It’s like leaving your front door unlocked and expecting no one to come in. The same type of technology is used in health and life sciences organizations worldwide. All IP-connected devices are potential attack vectors.
This is not an isolated case. In 2018 another IoT baby monitoring device was hacked (“Another baby monitor camera hacked | CSO Online”). These easy-to-hack baby monitors allowed attackers to target other smart devices on the same network. As it turns out, one unprotected device can make your entire home vulnerable, and even your employer. It’s like a chain reaction, but with hackers.
4. The Webcam Hack
Nothing is worse than feeling like you are being watched, except maybe actually being watched through your webcam. TRENDnet marketed its SecurView cameras as being perfect for a wide range of uses: not only could they serve as home security cameras, they could also double as baby monitors. Best of all, they were supposed to be secure, which is the main thing you want from a security camera. But as it turned out, anyone who could find the IP address of one of these devices could easily look through it. Even large-scale IP camera deployments are at risk, as seen in “Hackers reportedly breach hospital surveillance cameras, exposing the security risks of connected devices | Fierce Healthcare.”
In some cases, snoopers were also able to capture audio as well as video. It’s like having a peephole on your front door, but everyone can look through it.
5. The Vehicle Hack
Imagine an attack on an ambulance needed immediately to save a life, or worse, the same attack launched on multiple emergency vehicles at the same time. Cybersecurity professionals must endeavor to protect their entire attack surface, including their vehicle fleet. The last attack we’ll review was first demonstrated in July of 2015 by security researchers Charlie Miller and Chris Valasek. They were able to access the onboard software of a Jeep SUV and exploit a vulnerability in the firmware update mechanism.
Researchers took total control of the vehicle and were able to speed it up and slow it down, as well as turn the wheel and cause the car to veer off the road. Scary stuff!! As more people begin to embrace electric vehicles and move towards driverless car technology, it is increasingly important that we make sure these vehicles are as secure as possible.
IoT promises to change our future, but at the same time, it poses severe security risks. Therefore, we should stay aware and learn how to protect our devices against cyber-attacks. High-profile security lapses like those mentioned above only reinforce the potential for disaster when security is neglected. Healthcare IT professionals must, now more than ever, give IoT-based attacks the respect they deserve and put a program in place to mitigate these real-world risks.
Remember, how did the hackers get away? They ransomware… Stay safe out there!
Azure Integration Services year in review: An exciting, innovative 2023!
As we bid farewell to another remarkable year for Azure Integration Services, it’s a fitting moment to reflect on the pivotal achievements and transformative experiences that we’ve shared with our partners and customers throughout 2023. In this blog, we’ll revisit the highlights that have shaped an exciting year for enabling digital transformation with Azure Integration Services and set the stage for an exciting future. This year-in-review serves not only as a celebration of our shared accomplishments but also as a testament to the resilience, dedication, and spirit that continue to propel our organization forward.
In addition, we’d like to extend our warmest wishes to you all for a joyous and peaceful holiday season. May this time be filled with happiness, connection, and moments of reflection, as we look ahead to an exciting and innovative 2024.
New services
Azure Integration Environment
This year we introduced a new Azure service that provides a unified experience to help customers effectively manage and monitor their integration resources. Azure Integration Environment (in public preview) enables customers to organize their resources logically, providing business context and reducing overall complexity, while also allowing the flexibility to use integration environments in a way that aligns with internal standards and principles.
Azure Logic Apps workflow assistant
Bringing the power of AI to Azure Logic Apps for the first time, this workflow assistant (in public preview) answers any questions you have about Azure Logic Apps directly within the designer. The chat interface provides access to Azure Logic Apps documentation and best practices without requiring you to navigate documentation or search online forums.
Delivering new capabilities for Azure API Management
We are committed to ongoing investment, and here are a few highlights of our latest features that will continue to drive impactful business outcomes for our customers worldwide:
Microsoft Copilot for Azure: Now in public preview, Microsoft Copilot for Azure introduces policy authoring capabilities for Azure API Management. Easily create policies that match your specific requirements without knowing the syntax, or have already-configured policies explained to you. Simply instruct Copilot for Azure in the context of the API Management policy editor to generate policy definitions, copy the results into the editor, and make any necessary adjustments. Ask questions to gain insights into different options, modify the provided policy, or clarify the policy you already have. Explore more about this new capability here.
Centralized API Discovery and Governance: To tame the proliferating and fragmented API landscape, we introduced the Microsoft Azure API Center (in public preview). As part of the Azure API Management platform, customers can now consolidate their APIs in a centralized location for discovery, reuse, and governance.
Defender for APIs: With the advent of a security-first mindset for all enterprise apps, we have integrated a comprehensive defense-in-depth solution into our API management platform. The introduction of Defender for APIs (GA), seamlessly integrated with Azure API Management, empowers security teams to leverage the Defender for Cloud portal.
New Azure API Management v2 pricing tiers: To make API Management accessible to a broader range of customers and offer flexible options for various scenarios, we have introduced a new set of v2 pricing tiers for Azure API Management (in public preview).
…and much more!
There are even more major announcements for Azure Integration Services from 2023, including the reveal of Logic Apps’ new Data Mapper for Visual Studio Code and additional capabilities for Azure Event Grid, that you can read about in our blog post about Microsoft Ignite. Stay up to date with the latest announcements on new products, services, workshops, and more on our Azure Integration Services Blog.
Partner assets
Together with our global ecosystem partners, we work with customers to drive and scale successful integration efforts. To help partners like you continue scaling customer success, we’ve collected workshop assets and other resources from 2023 to help you support new projects.
Forrester Consulting Total Economic Impact™ (TEI) study
To provide businesses with a clearer understanding of the potential ROI when implementing Azure Integration Services, Microsoft commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study for 2023. This comprehensive analysis delves deep into the cost savings and, more importantly, the substantial business advantages that Azure Integration Services can unlock for organizations. Based on a trusted financial model and interviews with business decision makers, the study revealed that Azure Integration Services collectively delivered a substantial 295 percent ROI over a three-year period.
Partner ecosystem: Azure Integration Services Workshop
In our ongoing commitment to support partners like you in scaling customer success, we’ve recently launched the Azure Discovery Workshop for Azure Integration Services. This “campaign-in-a-box” collection offers partners seamless access to all the necessary components for conducting workshops centered around Azure Integration Services: everything essential for successful workshop execution.
IDC’s “Unleash the Power of APIs” white paper
Sponsored by Microsoft, International Data Corporation (IDC) released a white paper this year titled Unleash the Power of APIs: Best Practices and Strategies for Innovation that outlines why APIs are essential to digital-first strategies, the benefits of API management, and factors to consider when choosing an API management solution.
Recognition
We were thrilled this year for Azure Integration Services to be recognized as an industry leader. We believe this is a testament to our deep understanding of customer needs, strong customer engagement and adoption success, and continued investments in customers’ integration needs.
Gartner once again recognized Microsoft as a Leader in the 2023 Magic Quadrant for API Management, marking the fourth consecutive year of this recognition.
Gartner also positioned Microsoft as a Leader in the Magic Quadrant for Enterprise Integration Platform as a Service, Worldwide, for the fifth consecutive year.
Events
In addition to successful product and service launches at the three major Microsoft flagship events—Build, Inspire, and Ignite—we held several smaller workshops and informational sessions in 2023:
In our Unleash the power of APIs: Strategies for innovation webinar, analysts, product leaders, and Microsoft Azure API Management customers discussed how API management can maximize your investments and accelerate your API programs. From security to development, first-hand customer accounts to analyst insights, we discussed why APIs will continue to be important in the future.
The Unleash the Power of a Modern Integration Platform webinar offered a one-of-a-kind chance to engage in live conversations with Azure experts and network with fellow business leaders and peers who are embarking on the journey to app modernization. If you missed the opportunity to attend, don’t worry! You can now access the recorded session and witness firsthand how Azure Integration Services can bring added efficiencies and cost savings to your company through innovation and enhanced agility.
Azure Integration Services Day was an opportunity to hear about new product announcements directly from our product team. Experts behind some of your favorite Azure Integration Services—Azure Logic Apps, API Center, API Management, Service Bus, Event Grid, Messaging, and more—fielded questions from participants and shared the latest updates.
At Integrate 2023, our product team spotlighted our latest announcements and delivered live demos that captivated the global integration community. You can access the session recording here.
Customer success stories
We shared a host of customer stories in 2023: compelling testimonials that offer real-world evidence of Azure Integration Services’ tangible impact. These stories not only showcase our users’ successful implementations but also provide valuable insights to help potential customers envision the specific benefits and solutions that align with their own unique needs.
SPAR NL readies for the future of retail with Azure Integration Services
BÜCHI’s customer-centric vision accelerates innovation using Azure Integration Services
US LBM speeds up orders, unlocks mobile innovation, using Azure Integration Services
Optimal Blue is a leader in the mortgage industry with an API first mindset using Azure API Management
KPMG Netherlands streamlines enterprise integration platforms through Azure Integration Services
Take action
Migrate your Azure Logic Apps Integration Service Environment today to unlock new benefits
As the clock winds down on 2023, it’s still ticking on Azure Logic Apps Integration Service Environment (ISE)! The fully isolated and dedicated environment for running Azure Logic Apps workloads is set to retire on August 31, 2024. Don’t wait until the last minute to migrate your ISE instances and unlock a world of new benefits.
Microsoft Azure is fully committed to guiding you through this migration journey. While the creation of new ISE resources has already ceased, existing ISE resources will continue to receive support until the retirement date. For more information and to get started, read our complete blog about Azure Logic Apps Integration Service Environment migration for details.
Access a file from a person’s onedrive once they have resigned and their account is disabled
We need to recover a Teams recording from a person who has resigned from the business. The person’s Microsoft account was disabled, probably within the last 30 days.
Is there any way that we can recover this mp4 file?
Note that we have Global Reader access in M365, which allows us to open a person’s OneDrive via a link, but only if the user’s account is active. We also have Teams and SharePoint admin access.
Any help will be appreciated
Thanks
Brenton Merrett
API for Bing Chat
Hello,
I want to integrate my business server with the Bing Chat AI. Is there any kind of access or API that I need to buy? I have been searching for several days, and all I can find is AI Search on my Microsoft Azure account, which is not what I need.
Thank you!
Mandatory attachment field based on check boxes ticked
Hi everyone,
I would like to make the Attachment field mandatory based on which checkboxes are ticked.
No attachments are required for the Contact Number and Emergency Contact checkboxes.
The rest require attachments.
Submitter may check multiple checkboxes.
If they tick Contact Number and/or Emergency Contact, the Attachment field is non-mandatory.
If they tick Contact Number and
Azure BOT not opening as an app from the new Teams.
My Azure Bot, which was working fine as a Teams app in classic Teams, is not working in the new Teams. The same bot works as a chat channel in new Teams; it only fails when used through the app.
I am getting the following error: “There was a problem reaching this app.”
We have shifted back to classic Teams as a workaround. Please let us know what the issue could be and how to resolve it.
Thanks,
LBFO Teaming deprecation on Hyper-V and Windows Server 2022 – Solved
While creating a virtual switch using a teamed interface in Hyper-V for Windows Server 2022, the following error is encountered. To resolve this, NIC teaming for Hyper-V needs to be configured via PowerShell.
Step 1: Delete the existing teaming manually created.
Step 2: Go to PowerShell and run the command:
New-VMSwitch -Name "VMSwitch-1" -NetAdapterName "Embedded NIC 1","Embedded NIC 2"
(Here, I have given the switch the name 'VMSwitch-1' and aggregated two adapters; 'Embedded NIC 1' and 'Embedded NIC 2' are the adapter names in the list.)
Step 3: Check the algorithm of the VMSwitch command:
Get-VMSwitchTeam -Name "VMSwitch-1" | FL
(This command will display the load balancing algorithm. If it's HyperVPort, proceed to the next step; otherwise, you can skip the last step.)
Step 4: Set the load balancing algorithm to dynamic:
Set-VMSwitchTeam -Name "VMSwitch-1" -LoadBalancingAlgorithm Dynamic
(This command changes the load balancing algorithm to dynamic. Test it using the command in step 3. The teamed interface should now appear in the Hyper-V virtual switch.)
That’s It…..
Automatic archive per Exchange account in Outlook 2021
With Outlook 2021, how do you set up different automatic archive settings for each Exchange account?
any idea ?
Thank you
PL
Do you know the IMLOG2 Excel function?
Recently I’ve been exploring some lesser-known Excel functions. In doing so, I found out there are many functions in Excel for working with complex numbers. One of these is IMLOG2, which takes the base-2 logarithm of a complex number. Feel free to check out this video if you want to learn more: https://youtube.com/watch?v=ODkzawcrf5c
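For anyone curious what it computes: IMLOG2 takes a complex number written as text, e.g. =IMLOG2("3+4i"), and returns its base-2 logarithm. A quick way to sanity-check the result outside Excel is Python's cmath module:

import cmath

# Base-2 logarithm of 3+4i; the Excel equivalent is =IMLOG2("3+4i")
z = complex(3, 4)
print(cmath.log(z, 2))  # approximately (2.321928+1.337804j)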
What are some Excel functions you didn’t know existed?
How to identify the firewall filter based on ID
Hi,
We have started having a strange problem: it looks like Windows Firewall has started blocking traffic even though there are rules allowing it.
When I run the command
netsh wfp show netevents
In the XML file this generates, I found the following drop event related to my traffic:
<item><filterId>1910059</filterId><subLayer>FWPP_SUBLAYER_INTERNAL_FIREWALL_WF</subLayer><actionType>FWP_ACTION_BLOCK</actionType></item>
Does anybody know how to identify this filter?
Problems with permission settings in SSRS
I’m trying to browse the SSRS web portal URL from another device, but even though I’m using an administrator account it still says I don’t have enough permissions. Does anyone know what I should do?
Most of my messages go to my Important box and the others don’t arrive at all.
Since I cleaned out my temporary files and added one rule (done at the same time), my incoming messages stopped arriving in my Inbox in Outlook — sporadically at first, now not at all. For the past few days many of them go to my Important folder, and others don’t arrive at all. My sent messages all appear correctly in my Sent folder.
I checked the few rules I had, and they were fine. Just to be sure, I deleted all my rules but the problem persists.
All my messages arrive normally on my cell phone and in the Gmail app.
Help! Thank you.
how to get reference in app designer
I’m in Predator_Equity.Mlapp and I create Lista_strategieAggreg using:
Lista_stratzegieAggreg(app);
Is it possible to get a reference to this app? For example:
xx = Lista_stratzegieAggreg(app);
but it doesn’t run.
Why do I need it? Because when I create Lista_strategieAggreg I want to update a value in the panel (the text box marked by the red arrow in the photo), and to do that I need the reference.
Discrete filter bode shows wrong results
I have a discrete filter consisting of the difference of two FIR filters. One filter is multiplied by 0.5, the other is delayed by z^-1. But the resulting Bode diagram does not look correct, and when I display the resulting transfer function, it is not what I have calculated. Where is my mistake?
Ts = 1;
FIR1 = tf([-0.1675, 0.1750, 0.3500, -0.2250, -0.1675, -0.1250], 1, Ts);
FIR2 = tf([0.1250, -0.5850, 0.8500, -0.3000, 0.0500, -0.5850, -0.1250], 1, Ts);
H_1 = FIR1 * tf(1, [1 0], Ts); % delay FIR1 by one sample (multiply by z^-1)
H_2 = 0.5 * FIR2;
H_diff = H_1 - H_2;
figure;
bode(H_diff)
Why do I get errors with compiling even after I have installed the supported compilers Microsoft Visual Studio 2010 Express and Windows SDK 7.1?
I have a 64-bit MATLAB 7.13 (R2011b) and would like to use commands like MEX, MCC and MBUILD which require a supported compiler. I have the officially supported compilers for this release, Microsoft Visual Studio 2010 SP1 Express and Windows SDK 7.1. However, I cannot compile as I get various errors during either the
mex -setup
mbuild -setup
processes or when trying to compile the code. The MEX errors are:
ERROR: "… error using mex (line 206), unable to complete successfully…"
and the MBUILD errors are:
ERROR: Could not find the compiler "cl" on the DOS path.
Use mbuild -setup to configure your environment properly.
C:PROGRA~1MATLABR2011BBINMEX.PL: Error: Unable to locate compiler.
Error using mbuild (line 189)
Unable to complete successfully.
What is going on?
Several Y axis in the same plot
Hi everyone!
I am trying to make a plot including one right Y axis and two left Y axes. I was taking a look at the function ‘addaxis’, and I have an issue when using it, related to ‘aa_splot’. When addaxis calls ‘aa_splot’, MATLAB gives me this error:
Error in aa_splot (line 18) set(gca,'ColorOrder',cord(mod([0:6]+1,7)+1,:));
I also had a previous error related to ‘aa_splot’, and had to change line 13 to this: cord = get(gca,'ColorOrder'); (I think this was the fault of my MATLAB version: R2022b).
Thanks in advance!