Category Archives: Microsoft
Upgrade SharePoint 2016 to hybrid with OneDrive
Hi,
My organization is currently running SharePoint 2016 on-premises without OneDrive access. Recently, we kicked off discussions about upgrading SharePoint 2016 to a hybrid deployment with OneDrive for file storage.
We use Microsoft 365 for Teams and other desktop apps such as Excel, Word, PowerPoint, etc. If we go down the path of upgrading to a hybrid solution:
Do we need to set up OneDrive in the hybrid environment?
Can we use OneDrive to replace network shares for file storage?
What sort of configuration do we need to put in place to store sensitive documents with security labels?
Any input will be highly appreciated.
Thanks
Recommend a safe and reliable YouTube to MP3 converter for Windows 11
I’m currently on the lookout for a reliable and safe YouTube to MP3 converter that works seamlessly on Windows 11. I need a tool that can efficiently convert videos to high-quality audio files, ideally without compromising on sound quality. Additionally, it’s important that the software is user-friendly and free from excessive ads or security risks. If anyone has recommendations for such a tool, especially one that has proven to be dependable over time, please share your experiences. Your input will be greatly appreciated as it will help streamline my workflow and enhance my multimedia projects.
Why is my QuickBooks payroll update not working after an update?
My QuickBooks payroll update is not working, and I am getting errors during the update process. How can I resolve this issue and ensure my payroll updates successfully?
How to fix QuickBooks Error 1920 after a Windows update?
What is QuickBooks Error 1920, and how can I fix it? I’m having trouble installing QuickBooks Desktop due to this error.
Exchange admin center Delegation permissions error
I am having a strange issue with the Exchange Admin Center.
I am signed in as a Global Administrator, but when I go to a user’s mailbox and then to the delegation tab I get the following error:
Failed to get mailbox permissions
Error: User is not allowed to call Get-MailboxPermission
Is this a bug? I wouldn’t think it is a permissions issue, as I have been fine for years with my current permissions.
How to resolve the QuickBooks Database Manager failed-to-start issue after an update?
I am facing an issue where the QuickBooks Database Manager failed to start. What could be causing this, and what are the possible troubleshooting steps to resolve it?
Platform engineering: Monitor Backstage with Application Insights
The platform engineering journey requires abundant information to make informed decisions. Understanding how developers use the platform—how frequently and for how long—is invaluable. Since the internal developer portal (IDP) serves as the central hub for developers’ regular tasks, it becomes the ideal location for monitoring these activities and collecting essential data.
Backstage, a common IDP implementation, provides an excellent opportunity to demonstrate how to integrate monitoring. In this article, we’ll explore how to add monitoring to the Backstage portal using Azure Monitor, specifically Application Insights.
Upon reviewing the Backstage documentation, we find that the portal is already instrumented using OpenTelemetry. Furthermore, Application Insights is a supported provider for OpenTelemetry.
This is excellent news, as it streamlines the process and saves time that would otherwise be spent manually adding tracing calls to the Backstage code.
To enable OpenTelemetry instrumentation, follow the steps in the official documentation (https://backstage.io/docs/tutorials/setup-opentelemetry/). Start by adding the required packages using the following command:
yarn --cwd packages/backend add \
  @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @azure/monitor-opentelemetry-exporter
Next, create a new file named instrumentation.js inside the backend/src folder.
The content of the file should be:
const { NodeSDK } = require('@opentelemetry/sdk-node');
const {
  getNodeAutoInstrumentations,
} = require('@opentelemetry/auto-instrumentations-node');
const {
  AzureMonitorTraceExporter,
} = require('@azure/monitor-opentelemetry-exporter');

// Create an exporter that sends traces to Application Insights. The
// connection string is read from the environment, with a placeholder fallback.
// (An AzureMonitorMetricExporter is available from the same package if you
// also want to export metrics.)
const azTraceExporter = new AzureMonitorTraceExporter({
  connectionString:
    process.env['APPLICATIONINSIGHTS_CONNECTION_STRING'] ||
    '<YourAppInsightsConnectionString>',
});

// Register the exporter and enable automatic instrumentation for common
// Node.js libraries, then start the SDK.
const sdk = new NodeSDK({
  traceExporter: azTraceExporter,
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
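One wiring step is typically still needed: the instrumentation file must be loaded before the rest of the backend starts. A minimal sketch, assuming the file location above and following the approach in the linked tutorial (adjust paths to your repo layout):

node --require ./packages/backend/src/instrumentation.js packages/backend

Alternatively, set NODE_OPTIONS="--require ./packages/backend/src/instrumentation.js" in the environment that launches the backend.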
And with that…we are done!
Browse some pages within the Backstage portal and in a few minutes, telemetry will be available in Application Insights.
Here are some samples of the information that has been gathered. All of it comes from out-of-the-box reports, so without any further customization we can obtain a wealth of information.
We can check if the portal is returning errors, if the response time is within acceptable parameters, how many requests are made and the usage pattern…
Examine the details of a request to Backstage, the time taken to serve the request and the time spent on all dependencies…
Retrieve information from the accessed URLs, for example, to identify the most common searches and ascertain developers’ interests.
Matter WakeUp System AI
Project Overview
Developed in partnership with UCL, Great Ormond Street Hospital (GOSH) and Intel, the WakeUp System provides reassurance and support to patients with disabilities in long-term care and to the elderly. One of the system’s main objectives is to enable them to autonomously control their environment, and it is designed for use in both hospital and home settings. Moreover, the project encompasses a research study using Intel’s OpenVINO toolkit.
The system architecture is divided into three parts: triggers, targets and the WakeUp system. The WakeUp system acts as a message broker, connecting a patient’s commands to an appropriate target device and enabling actions to be performed. The system can be configured by a healthcare professional or by the patient using a graphical user interface (GUI).
The WakeUp System represents a promising idea in healthcare technology innovation. It has the potential to improve the quality of life and efficiency of healthcare services for disabled patients. This report provides a comprehensive overview of the design concept, the technical implementation of the project and the potential impact it could have on healthcare.
Demo Video
Project Journey
This project was completed over the course of six months. For the first three months, the team focused on the requirements engineering portion of the project: we set functional and non-functional requirements, created context and architecture diagrams, and broke the project down with the stakeholders so that it would be easy to implement in the following three months, making sure we included the most important features and requirements. This process also allowed the team to see how much was realistically achievable, and what should be kept as optional if time allowed.
For the implementation stage, the team utilised agile methodologies, sprints, continuous integration, continuous testing, and Git practices. Agile was the software development methodology chosen for this project because the team and stakeholders maintained constant communication, with meetings every week, and we wanted continuous feedback on design and implementation. This way we were able to develop a successful product. At the start of each week, the team would update stakeholders on progress and decide what needed to be done before the next meeting, and that work would then be completed. We also held our own internal meetings to demonstrate work to other team mates and see how much progress had been made.
Technical Details
During the development process, the project team used a variety of software development tools and practices. Continuous integration processes were incorporated to support the reliability and scalability of the system. In particular, the project utilised advanced technologies such as Intel OpenVINO, which significantly improves the performance of the AI algorithms used within the system.
To build the GUI, Microsoft Foundation Classes (MFC) was utilised. All data edited within the MFC application by staff is saved to the database to ensure long-term storage of application data. Features include managing users, managing devices, managing signals, and exporting usage data.
Triggers
In this section we show a visual representation of each trigger we have developed, as well as a video describing our project.
Tapping/Snapping Fingers Detection
The above gif demonstrates our sound classification trigger. It works by detecting these sounds repeatedly within a specified timeframe and then sending a signal to the WakeUp system. As the gif shows, the smart plug is turned on after a finger snap is detected.
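As a rough sketch of this windowed counting logic (the window length, the hit count, and the sendSignalToWakeUp helper are all hypothetical):

// Count positive sound-classifier detections inside a sliding window
// and fire a WakeUp signal once enough snaps/taps are seen.
const WINDOW_MS = 2000;    // hypothetical look-back window
const REQUIRED_HITS = 3;   // hypothetical number of snaps needed
const hits = [];

function onSoundDetected(nowMs) {
  hits.push(nowMs);
  // Drop detections that fell out of the window.
  while (hits.length > 0 && nowMs - hits[0] > WINDOW_MS) hits.shift();
  if (hits.length >= REQUIRED_HITS) {
    hits.length = 0;              // reset so the trigger does not re-fire
    sendSignalToWakeUp("snap");   // hypothetical call into the WakeUp broker
  }
}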
Eye Blinking Detection
As the gif demonstrates, our eye blinking detection trigger tracks the movement of the eye, constantly calculating the distance between the upper and lower eyelids, and counts a blink whenever that distance falls below a preset threshold. It counts the number of blinks over a certain period, with the count serving as a signal in the WakeUp System. For example, this trigger can be used for turning on lights or opening curtains.
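A minimal sketch of the blink counting described above (the threshold value and the normalised gap measure are hypothetical):

// Count blinks from a stream of eyelid-gap measurements: a blink is a
// transition from open (gap above threshold) to closed (gap below it).
const EYELID_GAP_THRESHOLD = 0.015; // hypothetical, tuned per patient

function countBlinks(eyelidGaps) {
  let blinks = 0;
  let closed = false;
  for (const gap of eyelidGaps) {
    if (!closed && gap < EYELID_GAP_THRESHOLD) {
      closed = true;
      blinks++;
    } else if (gap >= EYELID_GAP_THRESHOLD) {
      closed = false;
    }
  }
  return blinks; // the count over the period serves as the WakeUp signal
}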
Morse Code Vision
This gif shows our Morse code vision trigger. Building on the eye blinking trigger, we introduced a threshold to differentiate long and short blinks, allowing for Morse code communication. The gif shows that after a long blink to initiate the trigger, three consecutive short blinks are analysed as the letter “S”. While a bit more complex for patients to use, it enables the transmission of letter signals rather than simple blink counts, giving the user at least 27 different actions for a single trigger. For example, a nurse can configure the letter T to turn on the TV.
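A simplified sketch of the dot/dash thresholding and letter lookup (the duration threshold is hypothetical and the Morse table is abridged):

// Classify each blink by duration, then map the dot/dash sequence to a letter.
const SHORT_BLINK_MAX_MS = 300; // hypothetical: <= 300 ms counts as a dot
const MORSE_TO_LETTER = { ".-": "A", "-...": "B", "...": "S", "-": "T" }; // abridged

function decodeBlinkSequence(blinkDurationsMs) {
  const code = blinkDurationsMs
    .map((ms) => (ms <= SHORT_BLINK_MAX_MS ? "." : "-"))
    .join("");
  return MORSE_TO_LETTER[code] ?? null; // null when the sequence is not a known letter
}

decodeBlinkSequence([120, 150, 110]); // three short blinks -> "S", as in the demo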
Fall Detection
This trigger detects if a patient is falling from a seated position by tracking upper-body keypoints with the YOLOv8n-pose model, optimised with OpenVINO. A signal is generated if the patient’s body angle becomes too large, which can then prompt actions such as sending a message and photo to a nurse. The gif above shows the trigger constantly tracking the angle of the patient’s body and displaying a falling alert when the angle goes above the preset threshold.
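A sketch of the body-angle check (the keypoint format, axis convention, and threshold are hypothetical):

// Angle of the shoulder->hip line relative to vertical; a patient sitting
// upright gives an angle near 0, while slumping sideways increases it.
const FALL_ANGLE_THRESHOLD_DEG = 45; // hypothetical

function torsoAngleDeg(shoulder, hip) {
  const dx = hip.x - shoulder.x;
  const dy = hip.y - shoulder.y; // image y grows downwards
  return Math.abs((Math.atan2(dx, dy) * 180) / Math.PI);
}

function isFalling(shoulder, hip) {
  return torsoAngleDeg(shoulder, hip) > FALL_ANGLE_THRESHOLD_DEG;
}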
Results and Outcomes:
OpenVINO Benchmark:
As one of our key functional requirements, the OpenVINO toolkit boosts the inference performance of our triggers on edge devices, allowing us to run more capable models with limited computational power. To critically assess the performance boost provided by OpenVINO, we conducted a detailed benchmark on both triggers that utilise it, namely upper body fall detection and Whisper.
Upper Body Fall Detection
To assess the performance boost provided by OpenVINO, we conducted a benchmark for our upper body fall detection system. The system uses the YOLOv8n-pose model, comparing the performance with and without OpenVINO integration. We evaluated both live and offline inference performance. For live inference, we performed a continuous 3-minute test, while for offline inference, we assessed performance using the COCO 2017 validation dataset. Our comparisons focused on the mean boot time, pre-processing time, post-processing time, and inference time.
The benchmark was carried out on a computer running Ubuntu 22.04, equipped with an Intel i7-13650HX processor and 16GB of RAM.
The results, as depicted in the figures, reveal that although OpenVINO slightly increased boot time and only slightly improved pre-processing and post-processing times, it delivered a major improvement on the bottleneck of the model running time: average inference time. Notably, OpenVINO reduced the inference time by an average of 11 ms. Overall, including pre-processing and post-processing, OpenVINO reduced the average latency by approximately 25%.
Whisper
To assess the performance boost provided by OpenVINO for Whisper, we conducted a benchmark for our audio transcription trigger, which utilises the Whisper model. We evaluated both live and offline inference performance. The live benchmark included an hour of live inference, while the offline benchmark involved processing a 30-minute audio file. Our comparisons focused on the average inference time.
The improvements on the Whisper model are equally notable. For offline inference, OpenVINO cut the average inference time by 22%. For live inference, we observed a remarkable 56% performance boost, with average inference time dropping from 1.2 seconds to just 0.5 seconds.
Evaluation of Wakeup System with Stakeholders and Professionals:
The WakeUp system has been evaluated through live, real-time demonstrations in front of two different audiences. The first demonstration took place during the FHIR Hackathon in February 2024, where we presented two signals. The first used eye blinks as a trigger to toggle a light bulb, while the second triggered an alert via Telegram when the user being monitored fell off a chair. These demonstrations were given to representatives from companies such as Roche, Microsoft and GOSH Drive.
The second demonstration took place during Labs Day on 19 March, where we demonstrated Morse Vision and Morse Sound to teams from Intel and to the CTO of NTT DATA.
All the live demonstrations went smoothly and there were no embarrassing glitches. All participants who tested the system were impressed and showed great interest in the product. They suggested various ideas on how this concept could be applied to other industries such as nuclear power plants, farms or theatres.
From the WakeUp team’s perspective, we would like the setup process to be simpler. For example, to add a trigger, you first have to register it with the WakeUp system and then share the credentials with the trigger. This could be streamlined if the trigger could automatically register itself, although this might compromise the security of our system.
We are also pleased that what started as an ‘exploratory’ project has quickly developed into an idea that has been presented to a real audience.
Lessons Learned:
Throughout the course of the project, we have been left with invaluable lessons about teamwork, communication and strategy. Some of the many lessons learnt include stakeholder and group communication, git practices and exposure to a variety of different technologies.
Stakeholder and group communication: As this project was in partnership with different stakeholders, we maintained continuous communication in order to produce a system that would best match their requirements. Weekly meetings were held, where we would update stakeholders on progress and show what had been developed over the previous week. As we used agile methodologies, we learnt that requirements can change at any time during the project and that we should welcome those changes, as the focus is always on the stakeholders’ value. Communication is vital to ensuring progress, and it is very important that the voices of all stakeholders and team members are heard in order to produce a successful product.
Git practices: During this project, we used new concepts such as squashing commits, git rebase, pull requests, and git merge. Exposure to these concepts further improved our knowledge of Git and gave us experience of professional Git usage.
Exposure to different technologies: Over the duration of the project, we worked with a range of technologies and concepts including Home Assistant, integration of Matter devices, machine learning integration, and ZeroMQ, and we also learnt how to build executable files. Exposure to these technologies and new concepts provided professional programming practice and, as a result, improved the style of our code.
Collaboration and Teamwork
Team member contributions:
This project would have been impossible without the collaboration of our nine amazing team members. Collaboration and teamwork played a vital role in the success of this project. To ensure its smooth operation, we made a clear split of work between team members, shown below:
Frischer David Roman Louis (Team Leader): Team management, Stakeholder communication, development of Morse Vision and Morse Sound Trigger.
Hussain Fatima: Development of MFC front end, production of presentation video, development of WakeOnLan target.
Jakupov Dias: Development of sound based triggers, integration of OpenVINO with sound based triggers.
Kang Weihao: Development of Swagger documentation.
Rzayev Javid: Development of fall detection triggers.
Sivayoganathan Thuvaragan: Development of data statistics of WakeUp system.
Sun Ethan: Development of PyTest for continuous integration.
Wang Jingyuan: Development of Eye blinking trigger, integration of OpenVINO with vision based trigger, production of presentation video.
Wang Zena: Production of MFC front end.
Project Management
To ensure the smooth operation of our nine-person project, we have established a rigorous methodology and set of rules for project management and teamwork.
Throughout the 10 weeks of development, we held three weekly meetings to maintain the ongoing progress of the project. Meetings with our UCL supervisors were scheduled for Mondays, internal meetings to organise the weekly tasks took place on Wednesdays, and update meetings with our Intel partner were held on Fridays.
Each member of the WakeUp Team had a specific role to fulfill, with the Team Leader (TL) assisting and ensuring that everyone completed their tasks on schedule. To promote collaboration and integration among team members, strict common practices were implemented. For example:
Commit Message: Must follow this template: [Subject] Description of the task achieved. Following this convention made it easier to track the progress of each component in our mono-repository on GitHub.
Pull request review: Only the TL was authorised to push to the main branch. Other members were required to create a pull request, which one or two members would review to ensure that the commit message followed convention, that there was a single (squashed) commit to be rebased and merged into main, and that the commit had passed the project’s pytest suite so it would not break other components of the system.
Notion App: The TL motivated all team members to document their work progress and knowledge gained throughout the project in the Notion workspace. Information about organisation, setup guides, troubleshooting, and learning was recorded on the Notion page to centralise the team’s knowledge.
Future Development
The Matter WakeUp Signal System holds significant promise for future development and enhancements. Here are some potential areas for improvement:
Simplified Setup and Installation
Introducing a graphical installation process would make it easier for non-technical users to configure and use the system. This would streamline the setup process, reducing the potential for errors and making the system more accessible.
Advanced and Varied Triggers
Integrating more advanced machine learning models for precise patient monitoring and interaction could enhance the system’s capabilities. This includes expanding the range of triggers to accommodate various patient needs and conditions.
Broader Device Compatibility
Expanding the range of compatible smart devices and ensuring seamless integration with other healthcare systems and standards is crucial. Future testing should include a wider array of smart devices like smart speakers and additional sensors.
Conclusion
The Matter WakeUp Signal System project demonstrates the transformative potential of integrating smart technology into healthcare settings. Here are the key points:
Patient Autonomy: By enabling patients with disabilities and the elderly to control their environment autonomously, the system significantly improves their quality of life.
Workload Reduction: The system reduces the workload on nurses and hospital staff by automating routine tasks and monitoring.
Innovative Approach: The successful implementation of various triggers, the integration of Matter devices, and the use of advanced AI technologies like OpenVINO showcase the project’s innovative approach.
Positive Feedback: The system has received positive feedback from stakeholders and has shown practical applications and potential impact through successful demonstrations.
The project’s commitment to enhancing patient autonomy and improving healthcare efficiency makes it a valuable addition to the field of healthcare technology.
Special Thanks to Contributors:
Each contributor’s continuous support and involvement played a crucial role in the success of the project. Here, we extend special thanks to the following contributors.
Prof. Dean Mohamedally, Chief Supervisor and Professor of Computer Science at UCL
Emmanuel Letier, Associate Professor in Software Engineering at UCL
Costas Stylianou, Senior Technical Specialist for Public Sector and Health & Life Sciences at Intel, and Honorary Associate Professor at UCL
Prof. Julia Manning, Professor, Clinician, Policy Adviser, Convenor
Dr. Rafiq, NHS GP / GP Trainer / Honorary Lecturer
Anelia Gaydardzhieva, Assistant Supervisor at UCL
Jason Holtom, Account Executive for the Public Sector and Healthcare
Call to Action
We invite you to explore the Matter WakeUp Signal System further and consider how its innovative approach to patient care can be applied in your own healthcare settings. Here are some steps you can take:
Connect with Us: Feel free to reach out to our team for more information or collaboration opportunities. Together, we can continue to innovate and improve the quality of care for patients.
Team
The team involved in developing this project included 9 members. All of us are Masters students at UCL studying Software Systems Engineering:
David Frischer – Team Leader – Full Stack Developer.
GitHub URL: https://github.com/davidfrisch,
LinkedIn URL: https://www.linkedin.com/in/david-frischer/
Fatima Hussain – Full Stack Developer.
GitHub URL: https://github.com/fatimahuss,
LinkedIn URL: https://www.linkedin.com/in/fatima-noor-hussain/
Dias Jakupov – Full Stack Developer.
GitHub URL: https://github.com/Dias2406/,
LinkedIn URL: https://www.linkedin.com/in/dias-jakupov-a05258221/
Jingyuan Wang – Full Stack Developer.
GitHub URL: https://github.com/Andydiantu,
LinkedIn URL: https://www.linkedin.com/in/jingyuan-wang-9ba553208/
Weihao Kang – Full Stack Developer.
GitHub URL: https://github.com/Kang2001,
LinkedIn URL: https://www.linkedin.com/in/weihao-kang-b25152294
Thuvaragan Sivayoganathan – Full Stack Developer.
GitHub URL: https://github.com/thuvasiva,
LinkedIn URL: https://www.linkedin.com/in/thuvaragan-sivayoganathan-a95991227/
Javid Rzayev – Full Stack Developer.
GitHub URL: https://github.com/Javid2002,
LinkedIn URL: https://www.linkedin.com/in/javidrzayev/
Ethan Sun – Full Stack Developer.
GitHub URL: https://github.com/EthanSun11
Zena Wang – Full Stack Developer.
GitHub URL: https://github.com/ZenaWangqwq
#Accessibility #Innovation #SmartHome #TechnologyForGood #Matter #HomeAssistant #Microsoft #Intel #UCL #IXN
Wide Community Reach at Modern Workplace Conference Paris
The Modern Workplace Conference Paris 2024, primarily aimed at French-speaking users, was held for its sixth edition. The conference featured various speakers, including Microsoft MVPs, Regional Directors, industry technology experts, and Microsoft employees, who delivered more than 75 sessions over two days, providing attendees with the latest technical information on Microsoft 365 and Power Platform, including Microsoft AI and Copilot.
In this blog, we introduce the insights and future community activities of two MVPs who contributed significantly to the success of this conference.
Microsoft MVP, Chloé Moreau, describes her own role at the conference as “a big project manager or a conductor,” working alongside talented MVPs and passionate community leaders to ensure the best planning and execution. Although they are highly motivated community leaders rather than professional event organizers, they tackled challenges head-on and were proud of delivering a high-quality event. “Our best asset is that we are passionate people loving to share content and knowledge on our preferred technologies, for MWCP it is Microsoft 365 and Power Platform,” says Chloé.
Two years ago, the fourth edition was held entirely online due to pandemic restrictions; over the following years, it transformed into a hybrid format. Hybrid events enable extensive information sharing about new technologies not only with on-site attendees but also with remote participants. However, managing multiple communication channels makes the operation more complex. For this event, in addition to the on-site venue, organizer members managed live streaming on YouTube and used Teams for sharing slides and facilitating participant interaction. Reflecting on these efforts, Chloé states, “We were able to offer a great quality of interactions both online and in person by really providing all hybrid options to our attendees.”
Regarding concerns that offering more participation options might reduce the number of people attending in person, Chloé mentions, “This year we had a very strong level of attendees participating in person despite all content being accessible on YouTube. So, we are convinced that hybrid is the way to go for the future as Microsoft showed this for Microsoft Ignite for example.”
Chloé expresses a strong desire to continue supporting community members through various online channels. In addition to sharing the latest Microsoft news on LinkedIn, Facebook, X, and Instagram, the community hosts groups with over 6,500 professionals worldwide, monthly podcasts, and various community events that have attracted over 10,000 participants across five continents in the past eight years. By continuously supporting participants who meet at events, community leaders continue to demonstrate exemplary leadership that contributes to the growth of both individuals and the community as a whole.
Another key organizer, Patrick Guimonet, also an MVP, played a significant role in coordinating numerous aspects of the event from planning to execution, greatly contributing to its success. He expresses his joy, saying, “I am very proud to contribute to the success of this event and to be part of such a passionate, motivated, and committed team. Each edition is a new adventure that allows us to unite people and share our passion.”
Through interactions with participants, Patrick learned the importance of session diversity and innovation. He discovered that participants are not just looking for various presentations but are interested in practical case studies that provide specific knowledge. This feedback, offering practical insights into both business and technical challenges, was highly valuable.
Like Chloé, Patrick also reflects on the hybrid nature of the event, emphasizing that the YouTube session videos met participant expectations, stating, “This flexibility allows them to review sessions at their leisure, catch up on any content they might have missed, or simply refresh their memory on the information presented.”
Looking ahead to future community activities, Patrick mentioned the regular events hosted by the aMP (formerly aMS) community. These events offer opportunities for direct interaction at local events and include online streaming, allowing participation regardless of location. They provide a great hybrid event experience where the community can learn about the latest updates to Microsoft 365, Power Platform, and more. Community events are planned in various cities worldwide, so we encourage you to join them:
– June 25, 2024: aMS Brazzaville
– August 1, 2024: aMP Colombo
– August 24, 2024: aMP Pune
– August 27, 2024: aMP Salem
– September 12, 2024: aMP Tunis
– September 19, 2024: aMP Bordeaux
– September 27, 2024: aMS Leipzig
– October 3, 2024: aMP Montpellier
– October 18, 2024: aMP Seoul
– October 23, 2024: aMP Kuala Lumpur
– October 25, 2024: aMP Manila
No Internet after upgrading to v24H2 Release Preview
Upgraded to v24H2 RP and no internet. Have reported this previously on here. I have a Dell Inspiron 5502 about 3 years old. Had to roll back to v23H2.
I cannot report the ‘reason why you are rolling back’ as there is no internet.
I have updated all the drivers via Dell SupportAssist. However, it looks like the problem is “Network services is not running as expected. Network services has stopped.” The service cannot be restarted.
Does anyone have advice on this please?
Trainable Classifiers – Tips
Hello All,
Just sharing some tips to assist with the process of data collection and the creation of trainable classifiers for the purpose of labelling/Data Loss Prevention.
When training machine learning to recognize a certain document type, the documents must have one or more recognizable aspects. Possible usable recognizable aspects of the data/document type:
– Keyword or metadata values (keyword query language)
– Previously identified patterns of sensitive information like social security, credit card, or bank account numbers (sensitive information type entity definitions)
– Document fingerprinting: recognizing an item because it’s a variation on a template
– The presence of exact strings (exact data match)
In the examples below, we focus on document fingerprinting and previously identified sensitive information types. Regarding positive samples, the sample files display a pattern: credit card information (dummy data), keywords referring to credit card information such as CVV2/AMEX, as well as SSN information. This can be regarded as a pattern for positive detection. These data samples (about 150 samples of a similar pattern) are stored in a folder in a dedicated SharePoint site; the same items can also be used as false samples for another classifier.
Regarding negative samples, the concept is the same: they can also be stored in a folder in a dedicated SharePoint site and should have a unique pattern or fingerprint. For example, the samples below represent credential information (dummy). There need to be about 150 samples or so, and they should strongly represent a uniform document/data type different from the positive samples. Similarly, the data is stored in a dedicated folder in a SharePoint site.
Once the trainable classifier is created and fed this information, it will successfully identify the data type, facilitating detection and minimizing potential false positives.
DockerCompose: obsolete `version` in auto-generated file
I use the `DockerCompose@0` task to build Docker images. After updating Docker Compose on the agents, I get warnings saying:
##[error]time="2024-06-18T00:06:56+02:00" level=warning msg="/data/agents/A1/.docker-compose.1718662015607.yml: `version` is obsolete"
It seems like this auto-generated compose file includes the `version` key at the top, but I did not find any option to disable that.
Notifications for SharePoint news in the desktop Teams client – Best Practices?
Hello Community,
I have a customer that has a SharePoint intranet. They have many, many sites and a home site. On the home site, the news from all sites is aggregated. This works fine.
They also see the news in Viva Connections. This works fine.
They get notifications about new news on their mobile phones. This works fine.
But they don’t get any notifications in the activity feed in Teams when there is a new SharePoint news post on any of the sites (like the notification you get when someone sends an announcement in Viva Engage).
I did not find any information about such a feature.
I mean, I could always build a flow or something like that.
This would be easy if I only had one SharePoint site with news, but I have many sites.
And the customer does not want the news to create Teams messages, but just to create an activity item.
I am pretty sure this is doable with a flow, but that seems to be quite complicated.
For the requirement “Notify me in Teams when there is a new SharePoint news post”:
How would you deal with such a requirement? Any Ideas?
Best Regards,
Sven
I’m missing something very fundamental … sorry for the newbie question
I’ve installed Project Online and the Project desktop app. I have created a project and can open it in Teams, but I can’t work out how to open it in the desktop app. The URL starts with https://project.microsoft.com/…
I have created a project in the desktop app and can see it in my Project dashboard, but it has a different icon and opens in a different web(-looking) app (Project Center). The URL ends with sharepoint.com/sites/pwa/Projects.aspx
What I am trying to do is explore the integration between Project Online and Planner, but I seem to have three different products (Project Online, Project Server on my SharePoint domain, and Planner in Teams).
So, I reckon I am missing a key piece of architecture. Would someone mind overlooking my ignorance and helping me understand what’s going on, please? Thank you.
Unleashing PTU Token Throughput with KV-Cache-Friendly Prompt on Azure
1- Introduction
PTUs are reserved processing capacity, ensuring stable performance for uniform LLM workloads. This reserved capacity makes KV caching more effective compared to Pay-As-You-Go (PayGo). This blog post delves into the role of Key-Value (KV) caching in enhancing PTU throughput and practical strategies for creating cache-friendly prompts that maximize efficiency.
2- What are Provisioned Throughput Units (PTUs)?
Provisioned Throughput Units (PTUs) in Azure represent a dedicated model processing capacity that can be reserved and deployed for handling prompts and generating completions. The key benefits of PTUs include:
Predictable Performance: Ensures stable maximum latency and throughput for uniform workloads.
Reserved Processing Capacity: Once deployed, the throughput is available irrespective of utilization.
Cost Savings: High throughput workloads may lead to cost savings compared to token-based consumption models.
3- KV Caching: Enhancing Efficiency in Language Models
Key-Value (KV) caching is a technique employed in generative transformer models, such as large language models (LLMs), to optimize the inference process. Key aspects of KV caching include:
Reduction of Computational Cost: Minimizes the need to recompute key and value tensors for past tokens during each generation step.
Memory-Compute Trade-off: Tensors are stored (cached) in GPU memory, balancing memory usage and compute efficiency.
4- Crafting KV Cache-Friendly Prompts:
To optimize your prompts for KV caching, consider the following strategies:
Position Dynamic Elements Wisely: Place dynamic elements, such as grounding data, date & time, or chat history, toward the end of your prompt.
Maintain Order for Static Elements: Keep static elements like safety instructions, examples, and tool/function definitions at the beginning and in a consistent order.
Dedicate Your PTU Deployment: Dedicating your deployment to a few use cases can further improve cache hit rates, as the requests will be more uniform.
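As a concrete illustration of these three rules, here is a minimal sketch of a prompt builder (all names and content are hypothetical; the point is that the static prefix stays byte-identical across requests so the server-side KV cache can be reused):

// Static elements first, in a fixed order: safety instructions,
// few-shot examples, tool definitions. Built once and reused verbatim.
const STATIC_PREFIX = [
  "You are a helpful support assistant. Follow the safety rules below.",
  "Example: Q: How do I reset my password? A: Go to Settings > Security.",
  "Tools: lookupOrder(orderId), createTicket(summary)",
].join("\n");

// Dynamic elements last: chat history, retrieved grounding data, date/time.
function buildMessages(chatHistory, groundingData) {
  return [
    { role: "system", content: STATIC_PREFIX }, // cacheable part of the prompt
    ...chatHistory,                             // changes per conversation
    {
      role: "user",
      content: `Context retrieved now: ${groundingData}\nCurrent time: ${new Date().toISOString()}`,
    },
  ];
}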
5- A Case Study with GPT4-T-0409:
The following experiments focused on the impact of the cacheable/fixed percentage of the prompt on system performance, specifically average time-to-first-token and throughput. The results showed a clear trend: as the fixed/cacheable part of the prompt increased, the average latency decreased and the request capacity increased.
General Settings:
Model: GPT4-T-0409
Region: UK South
PTU: 100
Load test duration: 5 min
Experiment 1:
Input token size: 10245
Output token size: 192
Cacheable % of the prompt | Throughput (requests/min) | Time to first token (sec)
1% | 7 | 2.4
25% | 9 | 2.0
50% | 12.5 | 1.77
75% | 20 | 1.3
Analysis:
Throughput Improvement: As the cacheable percentage of the prompt increased from 1% to 75%, throughput saw a significant increase from 7 requests per minute to 20 requests per minute. This translates to nearly a threefold improvement, highlighting the efficiency gain from caching.
Latency Reduction: The time to the first token decreased from 2.4 seconds to 1.3 seconds as the cacheable percentage increased. This reduction in latency indicates faster initial response times, which is crucial for user experience.
Experiment 2:
Input token size: 5000
Output token size: 100
Cacheable % of the prompt | Throughput (requests/min) | Time to first token (sec)
1% | 17 | 1.31
25% | 22 | 1.25
50% | 32 | 1.16
75% | 55 | 0.9
Analysis:
Throughput Improvement: When the cacheable percentage of the prompt increased from 1% to 75%, throughput saw an impressive rise from 17 requests per minute to 55 requests per minute. This more than threefold increase demonstrates the substantial impact of cache-friendly prompts on system performance.
Latency Reduction: The time to the first token improved from 1.31 seconds to 0.9 seconds with higher cacheable percentages. This faster response time is beneficial for applications requiring real-time or near-real-time interactions.
* The results may vary based on the model type, deployment region, and use case.
Summary of the results:
In both experiments, a longer cacheable part of the prompt resulted in significant boosts in throughput and reductions in latency. The improvements were more pronounced in Experiment 2, likely due to the smaller input token sizes.
Throughput: Across both experiments, a higher cacheable percentage of the prompt resulted in substantial increases in throughput. In Experiment 1, throughput increased by almost 186%, and in Experiment 2, it increased by approximately 224% from the lowest to the highest cacheable percentage.
Latency: The time to the first token decreased consistently as the cacheable percentage of the prompt increased. This reduction in latency enhances the user experience by providing quicker initial responses.
These results underscore the importance of optimizing prompts to be cache-friendly, thereby maximizing the performance of the system in terms of both throughput and latency. By leveraging caching strategies, systems can handle more requests per minute and provide faster responses, ultimately leading to a more efficient and scalable AI deployment.
6- Conclusion
Provisioned Throughput Units (PTUs) in Azure offer significant advantages in terms of performance, capacity, and cost savings. By leveraging KV caching and creating cache-friendly prompts, you can further enhance the efficiency of your AI workloads. Optimizing prompt structure not only maximizes the benefits of PTUs but also ensures more effective and resource-efficient model processing.
7- Acknowledgments
A special thanks to Michael Tremeer for his invaluable review and feedback on this blog post. Your insights have greatly enhanced the quality of this work.
8- References
Transformers KV Caching Explained | by João Lages | Medium
Techniques for KV Cache Optimization in Large Language Models (omrimallis.com)
VBA-Delete all connection of multiple files in a folder and save to new files.
Hi all,
Could you please show me the VBA code which I can remove all data connections in multiple files in a folder at once and save them to new files ?
Really appreciate your help !
Best regards,
VT
Hi all, Could you please show me the VBA code which I can remove all data connections in multiple files in a folder at once and save them to new files ? Really appreciate your help ! Best regards,VT Read More
Phi-3 Vision – Catalyzing Multimodal Innovation
Co-authors: Priya Kedia, Michael Tremeer
Contributors: Ranjani Mani
Phi-3 Vision, a lightweight and state-of-the-art open multimodal model, is a significant advancement in Microsoft’s AI offerings. Developed with a focus on producing a high-quality, reasoning focused model, Phi-3 Vision utilizes synthetic data and curated publicly available web data to ensure its robustness and versatility. At only 4.2 billion parameters, it strikes an impressive balance between performance and efficiency, making it an attractive option for a wide range of applications.
As the first multimodal model in the Phi-3 family, Phi-3 Vision extends beyond the capabilities of its predecessors – Phi-3-mini, Phi-3-small, and Phi-3-medium – by seamlessly blending language and visual input. It boasts a context length of 128K tokens, allowing it to support complex and nuanced interactions. Designed with the intention to run on devices, Phi-3 Vision provides the benefits of offline operation, cost-effectiveness, and user privacy.
Phi-3 Vision has demonstrated versatility across various use cases, including Optical Character Recognition (OCR), Image Captioning, Table Parsing, and Reading Comprehension on Scanned Documents, among others. Its ability to provide high-quality reasoning with both visual and text input will drive innovation and lead to the development of new applications that are both transformative and sustainable. As an example, here is a quick demo showcasing how car footage can be analyzed to assess vehicle damage on an edge device, giving instant feedback to the end user. When paired with a larger LLM like GPT-4o, Phi-3 can form part of a hybrid workflow that combines the efficiency of Phi-3 for simpler tasks with the power of GPT-4o for more challenging tasks, unlocking the best of both worlds in a multi-step pipeline.
Market Trends
The landscape of artificial intelligence (AI) is in a state of rapid evolution, and within this space, Microsoft’s Phi-3-Vision emerges as a noteworthy trendsetter. Phi-3-Vision, a member of Microsoft’s broader Phi-3 family, represents a significant leap in multimodal AI capabilities, blending language and vision processing.
The Rise of Multimodal AI Models
Multimodal AI models, such as the Phi-3-Vision, are increasingly gaining attention due to their ability to interpret and analyze both textual and visual data. This dual capability not only enhances user interaction with digital content but also opens up new avenues for data analysis and accessibility. As businesses and consumers alike demand more intuitive and capable AI solutions, the prominence of multimodal models is expected to grow.
Open Source as a Catalyst for Innovation
Phi-3-Vision’s open-source nature stands out as a key trend in the AI market. By allowing developers to access and build upon the model freely, Microsoft is fostering a community-driven ecosystem where innovation can thrive. This approach is likely to inspire other AI developers and companies to adopt and build upon the model, potentially leading to a surge in collaborative AI advancements.
Efficiency and Edge Computing
Another significant trend is the shift towards more efficient AI models that can operate on devices with limited computational power, such as smartphones and edge devices. Phi-3-Vision’s compact yet powerful architecture exemplifies this trend, which is driven by the need for cost-effective and less compute-intensive AI services. As a result, the market is witnessing a growing interest in AI models that are optimized for on-device, edge, and offline inference scenarios.
AI Accessibility and Democratization
The Phi-3 project’s goal to democratize AI through smaller, efficient models aligns with a broader market trend towards making AI more accessible to everyday users and developers. By making the model available on Azure AI Studio, Azure AI model catalog as well as on hugging face, Microsoft has simplified the adoption and integration of AI capabilities into various applications.
Future Integration in Various Industries
Phi-3-Vision’s adaptability and performance indicate a trend towards integrating advanced AI models into a wide array of industries. From document digitization to advanced automation solutions, Phi-3-Vision and similar models are set to transform various sectors by enhancing productivity and reducing operational costs.
Competitive Landscape
Despite its relatively compact size, Phi-3-Vision demonstrates impressive performance that is on par with much larger models, and it is one of the smallest LLMs with multimodal capabilities. This efficiency makes it particularly suitable for deployment on devices with limited computational resources, such as smartphones. In addition, the optimized versions of the model in ONNX format ensure accelerated inference on both CPU and GPU across different platforms, including server, desktop, and mobile environments.
Model Architecture and Capabilities
Phi-3 Vision is based on the Transformer model architecture, which has demonstrated remarkable success in various NLP tasks. It contains an image encoder, connector, projector, and the Phi-3 Mini language model. The model’s ability to support up to a 128K-token context length with just 4.2 billion parameters allows for extensive multimodal reasoning, making it adept at understanding and generating content from complex visual inputs like charts, graphs, and tables. Its integration into the development version (4.40.2) of the industry-standard transformers Python library further simplifies its adoption in AI-driven applications.
Training Data and Quality
One of the factors that differentiates Phi-3 Vision is its training data. Unlike many models that rely solely on human-generated data (such as web content and published books), the datasets used to train the Phi-3 family of models are created using advanced synthetic data generation techniques along with highly curated public web data. This approach aims to maximize the quality of the training data, with a specific focus on helping the model develop advanced reasoning and problem-solving skills. This training dataset contributes to the model’s robustness and versatility, enabling it to perform well beyond expectations in various visual reasoning tasks. It has demonstrated superior performance in a range of multimodal benchmarks, outperforming competitors such as Claude 3 Haiku and coming close to the capabilities of much larger models like OpenAI’s GPT-4V.
Target Use Cases and Applicability
In the broader AI industry, there is a strong trend of replacing larger models like GPT-4o with more efficient models like Phi-3 as AI builders seek to optimize their GenAI use cases. A common pattern is to launch a use case with a powerful LLM like GPT-4o and, once the solution is in production, incorporate a more efficient SLM like Phi-3 for the less complicated and more narrow parts of the problem. The initial batch of production data generated by GPT-4o can then be used to fine-tune the Phi-3 model, offering accuracy comparable to the large model at a fraction of the cost. This approach has been documented as a reliable and effective technique for reducing the costs of LLM-powered solutions while maintaining similar performance.
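A minimal sketch of that hybrid routing pattern (classifyComplexity, callPhi3, and callGpt4o are hypothetical wrappers around your deployed endpoints):

// Route narrow, well-understood requests to the efficient SLM and
// everything else to the larger model.
async function answer(request) {
  const route = classifyComplexity(request); // e.g. rules or a cheap classifier
  return route === "simple" ? callPhi3(request) : callGpt4o(request);
}

The transcripts produced by the GPT-4o path can later serve as fine-tuning data for the Phi-3 path, gradually widening the set of requests the cheaper model handles well.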
Given this trend, Phi-3 has the potential to be leveraged for many use cases involving memory- or compute-constrained environments, latency-bound scenarios, general image understanding, OCR, and chart and table understanding.
Document and Image Analysis for KYC
Use Case: Combining text extraction and image classification to streamline the Know Your Customer (KYC) process. This helps in verifying customer identity and ensuring compliance with legal and regulatory standards in sectors like banking and financial services. Example: Automating the verification of identity documents such as passports and driving licenses by extracting text and checking the validity of images to expedite the KYC process.
Enhanced Customer Support and Product Returns
Use Case: Using text and image analysis to enhance customer support operations, including the management of product returns. This approach helps in quickly identifying issues through customer descriptions and photos of returned items, thereby improving customer satisfaction and operational efficiency. Example: Automatically processing customer complaints that include photos of defective products, enabling rapid resolution through efficient handling of returns or exchanges.
Content Moderation for Social Media
Use Case: Integrating text and image analysis to identify and moderate inappropriate content on social media platforms. This helps in maintaining community standards and ensuring a safe environment for users. Example: Automatically detecting and removing posts with offensive language and harmful images, ensuring compliance with community guidelines and promoting a positive user experience.
Video Footage Analysis for Auto and Home Insurance
Use Case: Analyzing video footage for assessing damages and verifying claims in auto and home insurance sectors. This capability allows for accurate evaluation of incidents and helps in processing claims more efficiently. Example: Processing video footage of a car accident to identify the cause and extent of damage, aiding in quick and accurate claim settlements. Similarly, evaluating home damage videos for insurance claim assessments.
Visual Content Analysis for Educational Tools
Use Case: Utilizing text and image analysis to develop interactive and adaptive educational tools. This can enhance learning by providing customized content and feedback based on both text and visual inputs from students. Example: Creating adaptive learning platforms that analyze students’ written responses and hand-drawn diagrams to offer personalized feedback and additional resources.
With the trend towards decentralized computing, users of edge devices such as smartphones, tablets, and IoT devices require lightweight AI models that can operate with limited computing resources. Phi-3 Vision’s ability to run efficiently on smaller devices makes it attractive to this demographic. By leveraging ONNX Runtime Mobile and Web, Microsoft is working to enable Phi-3 Vision on a broad spectrum of devices, from smartphones to wearables. This has led to an interest in Phi-3 vision models from a wide demographic of customers.
Partnerships and Collaborations
Partnerships with industry players, as seen with DDI Labs’ integration of Phi-3 Vision, can lead to transformative applications in areas such as video analytics and automation. Its potential to improve operations, such as in dock automation, demonstrates the practical benefits of adopting such advanced AI tools that address real-world challenges.
Taking a Deep Dive into Code
Getting Started
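A minimal sketch of calling the model, assuming you have deployed Phi-3 Vision as a serverless endpoint from the Azure AI model catalog and that it exposes an OpenAI-style chat completions route (the endpoint URL, path, and payload shape are illustrative; check your deployment’s API reference for the exact contract):

// Node 18+ (built-in fetch): send an image plus a question to the endpoint.
const endpoint = process.env.PHI3_ENDPOINT; // e.g. https://<your-deployment>.inference.ai.azure.com
const apiKey = process.env.PHI3_API_KEY;

async function askAboutImage(imageUrl, question) {
  const res = await fetch(`${endpoint}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      messages: [
        {
          role: "user",
          content: [
            { type: "image_url", image_url: { url: imageUrl } },
            { type: "text", text: question },
          ],
        },
      ],
      max_tokens: 256,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

askAboutImage("https://example.com/dashboard.png", "Summarize this chart.").then(console.log);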
With basics taken care of, what’s next?
Deploy a quantized version of the model at the edge
https://onnxruntime.ai/blogs/accelerating-phi-3
Fine-tune the model for your domain-specific use case
Phi-3CookBook/md/04.Fine-tuning/FineTuning_Qlora.md at main · microsoft/Phi-3CookBook (github.com)
Future Considerations for the Phi-3 Vision Team
Ethical Considerations and Bias Mitigation
Despite safety post-training, the potential for unfair or biased outcomes remains a concern due to societal biases reflected in training data. Ongoing efforts to mitigate these risks are critical to maintaining the integrity and social acceptability of AI technologies like Phi-3 Vision.
Computational and Energy Efficiency
As AI models grow in complexity and capability, ensuring computational and energy efficiency becomes increasingly challenging. Striking a balance between performance and resource consumption is essential for sustainable AI development, especially for models intended for widespread use across various devices.
Security and Privacy
With the proliferation of AI in personal and professional domains, security and privacy concerns must be addressed. Protecting user data and preventing unauthorized access or misuse of AI technologies are paramount for maintaining user trust and complying with regulatory requirements.
Final Thoughts
In conclusion, the Phi-3 family, spearheaded by Phi-3 Vision, exemplifies the progress and potential of AI. While challenges remain, the opportunities these models present are vast and ripe for exploration. As AI continues to evolve, models like Phi-3 Vision will be instrumental in shaping innovative solutions that could redefine the way we interact with technology and process information in our digital world.
References
Microsoft Official and Tech Community
https://azure.microsoft.com/en-us/blog/new-models-added-to-the-phi-3-family-available-on-microsoft-azure/
https://mspoweruser.com/microsoft-announces-phi-3-vision-a-new-multimodal-slm-for-on-device-ai-scenarios/
https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/affordable-innovation-unveiling-the-pricing-of-phi-3-slms-on/ba-p/4156495
https://techcommunity.microsoft.com/t5/educator-developer-blog/using-phi-3-amp-c-with-onnx-for-textand-vision-samples/ba-p/4161020
https://techcommunity.microsoft.com/t5/microsoft-developer-community/getting-started-generative-ai-with-phi-3-mini-running-phi-3-mini/ba-p/4147246
https://github.com/microsoft/Phi-3CookBook
Technical and AI-focused Publications
https://huggingface.co/blog/Emma-N/enjoy-the-power-of-phi-3-with-onnx-runtime
https://huggingface.co/microsoft/Phi-3-medium-128k-instruct
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cpu
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda
https://venturebeat.com/ai/microsoft-phi-3-generally-available-phi-3-vision-preview/
https://www.analyticsvidhya.com/blog/2024/05/microsoft-phi3/
https://towardsdatascience.com/6-real-world-uses-of-microsofts-newest-phi-3-vision-language-model-8ebbf
https://www.digit.in/news/general/microsoft-announced-phi-3-vision-an-ai-model-for-phones-that-can-analyse-pictures.html
How do you burn Windows 11 ISO to USB on Mac?
I recently ran into a problem where my Windows PC was damaged, and now I'm left with only my Mac to work on. I need to create a bootable Windows 11 USB drive using my Mac: I have the ISO file, but I'm unsure of the correct tools and procedure for burning an ISO to USB on a Mac. Given that Macs handle file systems differently, I'm looking for advice on how to properly format the USB drive and write the ISO to it. If anyone has experience with this or knows any reliable methods, could you please share your insights?
How to build this “custom card on hover” introduced in this article?
Article: https://learn.microsoft.com/en-us/sharepoint/dev/declarative-customization/formatting-advanced
I want to show a history status like the one in the article's "custom card on hover" section. But what is the setup behind it? For example, is the status column a lookup type? Does it look up another list? If so, what is the type of the column being looked up, and what values does that column hold to produce such a historical process status?
Please help, thanks!
Exchange Online now blocking whitelisted domain
We have an external linux server that has been able to email logs and script output files to admin email addresses on Exchange 365 without problems for many years. Note: The domain name for the server has been whitelisted in Exchange Admin Center > Mail Flow > Rules.
Unfortunately, over the last few days we are now receiving NDRs for some of the emails originating from the server. For example:
This is the mail system at host ip-x.x.x.x.us-east-2.compute.internal.
I’m sorry to have to inform you that your message could not be delivered to one or more recipients. It’s attached below.
For further assistance, please send mail to postmaster.
If you do so, please include this problem report. You can delete your own text from the attached returned message.
The mail system
<email address removed for privacy reasons>: host
companyname-com.mail.protection.outlook.com[z.z.z.z] said: 550
5.7.1 Unfortunately, messages from [n.n.n.n] weren’t sent. For more
information, please go to http://go.microsoft.com/fwlink/?LinkID=526655
AS(900) [DM6PR18MB3555.namprd18.prod.outlook.com 2024-06-17T23:44:01.329Z
08DC8F1D3280898A] [PH8PR15CA0013.namprd15.prod.outlook.com
2024-06-17T23:44:01.398Z 08DC8E6A7387EBAE]
[CY4PEPF0000E9D6.namprd05.prod.outlook.com 2024-06-17T23:44:01.398Z
08DC8E2D0D0B4FEE] (in reply to end of DATA command)
The article suggested by the NDR report (http://go.microsoft.com/fwlink/?LinkID=526655) recommends using the Microsoft delist portal to fix the problem. However, when I use the portal to attempt to delist the server’s IP address, I don’t get any confirmation email. Also, the NDR email doesn’t exactly match the conditions noted in the video found in the article – there isn’t any message in the NDR stating “Access denied – banned sending IP.”
Has anything changed in the Exchange Online environment recently that could cause this problem?
Thanks,
Don
PS Here is the log entry from the mail log on the linux server:
Jun 16 04:05:02 ip-172-31-1-188 postfix/smtp[417020]: 2F1A7103ECA3: to=<email address removed for privacy reasons>, orig_to=<root>, relay=companyname-com.mail.protection.outlook.com[z.z.z.z]:25, delay=2.1, delays=0.01/0/0.32/1.8, dsn=5.7.1, status=bounced (host companyname-com.mail.protection.outlook.com[z.z.z.z] said: 550 5.7.1 Unfortunately, messages from [n.n.n.n] weren’t sent. For more information, please go to http://go.microsoft.com/fwlink/?LinkID=526655 AS(900) [CO1PR18MB4810.namprd18.prod.outlook.com 2024-06-16T08:05:02.211Z 08DC8D95079D7987] [CH0PR03CA0236.namprd03.prod.outlook.com 2024-06-16T08:05:02.272Z 08DC8C31791B17CC] [DS3PEPF0000C37B.namprd04.prod.outlook.com 2024-06-16T08:05:02.264Z 08DC881BD5FEF961] (in reply to end of DATA command))