Tag Archives: microsoft
Generate sets of five numbers based on the given numbers from 5 different columns
I color them just to show that the numbers stay in the same column, and still are in ascending order.
Columns H to L or just N are a few examples of combinations I did manually to show the results I am looking for.
Columns A to E, the given numbers, should generate all the possible combinations as you see in H to L or just N.
I do not want to mix the columns like in P1
Let me know if you have any questions.
I attach the excel file
Thank you for your help
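For reference, here is a minimal Python sketch of the kind of column-wise combination generation described above, assuming the short lists below stand in for the values in columns A to E (the real values are in the attached workbook):

from itertools import product

# Hypothetical stand-ins for the given numbers in columns A to E;
# replace these with the actual column values from the workbook.
col_a = [1, 2, 3]
col_b = [10, 11]
col_c = [20, 21]
col_d = [30, 31]
col_e = [40, 41]

# product() takes exactly one value from each column per generated set,
# so columns are never mixed and each value keeps its column position.
for combo in product(col_a, col_b, col_c, col_d, col_e):
    print(combo)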
Location Customization
We are interested in switching to Bookings to get away from a home-grown appointment solution. We need to set up appointments with end users for special account provisioning. The appointments would be with one of four different system admins at the system admins' office, but I've configured it so the end user cannot choose a specific system admin. The appointments work, but they don't show a location, and we'd like the location of the appointment to be the system admins' office. Is there a way to do this?
Outlook issues on Server 2022 RDP clients
Hi, wondering if anyone can help with this:
Scenario:
Cloud Server running Server 2022 Standard 21H2 with 3 RDP users
O365 is installed and users are running Outlook V2403 in 32-bit (needed for MYOB AccountRight Enterprise)
When admin tries to send an email via MYOB, it all works fine. The Trust Center in Outlook shows standard Programmatic Access selection and NOT greyed out
When a user tries to send an email via MYOB, it pops up a message saying “A program is trying to send an email message on your behalf” and it then takes about 10 seconds before the Allow button becomes available. We believe this is due to Programmatic Access settings in the Outlook Trust Center but these are greyed out for users.
We have looked for a Group Policy template that controls this and also for the relevant registry settings with no luck. Anyone have any suggestions please? We’d like the popup not to appear at all but if the Allow button became active immediately, this would also be acceptable.
Public Preview: Hibernation Support extended to GPU and more General Purpose VM sizes.
Azure is excited to announce that hibernation support has been extended to the following General Purpose VM sizes up to 64 GB RAM:
In addition, as previously announced in March, select GPU VM sizes now have hibernation support and are available for public preview in all regions.
GPU sizes up to 112 GB RAM in the following VM series now support hibernation in Public Preview:
This expanded support provides even more opportunities for optimizing compute costs and effectively managing resources on Azure.
[Storage Explorer] How to install Storage Explorer on Ubuntu.
“Microsoft Azure Storage Explorer is a standalone app that makes it easy to work with Azure Storage data on Windows, macOS, and Linux.”
In this document, you will learn how to install Storage Explorer on Ubuntu 20.04 LTS, since you can use the GUI in a Linux environment as well. Storage Explorer is also compatible with Red Hat Enterprise Linux and SUSE Linux Enterprise.
1. What is Storage Explorer?
Storage Explorer is a GUI tool that enables you to manage your Storage Account from any OS environment. It is compatible with macOS, Ubuntu, other Linux distributions (Red Hat and SUSE), and Windows. This standalone app makes it easier to navigate and manage your Storage Account.
2. How to set up Storage Explorer in Ubuntu?
Prerequisites:
Before installing Storage Explorer, make sure gcc and a desktop environment (GUI) are installed on your Ubuntu machine. If you are using an Azure VM, install them first; otherwise you will encounter the following error when starting Storage Explorer, since most Linux VMs in Azure don't have a desktop environment installed by default.
[ERROR] Missing X server or $DISPLAY
[ERROR] The platform failed to initialize. Exiting.
Segmentation fault (core dumped)
[T]linuxvm::root::/root #
For more information on Azure VM, please visit the following link.
Step 1. You need to install snapd first in your environment.
The Storage Explorer snap will install all the dependencies and the updates. Therefore, it is a must to install snapd.
$root : sudo apt-get install snapd
Step 2. Once snapd is installed, let's go ahead and install Storage Explorer!
$root: sudo snap install storage-explorer
Once it is completed, it will look like this.
Step 3. Storage Explorer requires the use of a password manager, so you must execute the command below.
$root : snap connect storage-explorer:password-manager-service :password-manager-service
If you do not run this command, you will not be able to launch the Storage Explorer. You will face the error below.
<ERRO> Error checking minimum linux requirement [Error: An AppArmor policy prevents this sender from sending this message to this recipient; type="method_call", sender=":1.137" (uid=1000 pid=34087 comm="/snap/storage-explorer/60/StorageExplorerExe --no-" label="snap.storage-explorer.storage-explorer (enforce)") interface="org.freedesktop.Secret.Service" member="OpenSession" error name="(unset)" requested_reply="0" destination=":1.45" (uid=1000 pid=31977 comm="/usr/bin/gnome-keyring-daemon --start --components" label="unconfined")]
If you type the command, you will see a pop up, where you must authenticate.
Type in your root password.
Step 4. Once that’s ready, start your storage-explorer.
Then you will see a pop up to create a new key. Set up your key.
Once you set your password, you will see the Storage Explorer running on your environment. Please read through the agreement and accept it.
3. What are the limitations?
Storage Explorer is a GUI for navigating your storage, so if you are expecting to use it as a command-line tool, it is not the right choice for your needs. As a reminder, this tool is used to upload, download, and manage Azure Storage blobs, files, queues, and tables in your Storage Account.
4. Conclusion
We hope this article has helped you install Storage Explorer on Ubuntu. Make sure gcc and a desktop environment are enabled in your OS as well. If you are having other issues while using Storage Explorer, here is the troubleshooting guide you can refer to. If you have questions or need help, create a support request or ask Azure community support.
SharePoint CSOM access vs Microsoft Graph API
We are replacing old Microsoft.SharePoint.Client (CSOM) with Microsoft Graph API because the CSOM library is deprecated and Microsoft would prefer we move to the Graph API.
However, large queries that work with the old library return "too many resources" errors.
{ "error": { "code": "notSupported", "message": "The request is unprocessable because it uses too many resources",
The query string covers three hours and only 21 records.
/items?$filter=fields/Created ge '2024-04-17T10:00:01Z' and fields/Created le '2024-04-17T12:59:32Z' and (fields/CustID eq 'FUL-015')&expand=fields&top=133
Online advice recommends turning the following header off or on. I've run it both ways and get the same result.
Prefer: HonorNonIndexedQueriesWarningMayFailRandomly
The Graph as it relates to SharePoint seems to not be ready for primetime.
My questions are:
Is this a known issue?
Are there alternative libraries that can handle the load?
Will the SharePoint API be more robust?
Thank you.
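For context, here is a minimal Python sketch of how the failing call above can be reproduced, assuming a valid access token and placeholder site and list IDs (both hypothetical), sending the same $filter and Prefer header quoted above:

import requests

ACCESS_TOKEN = "<access-token>"  # assumed to be acquired separately, e.g. via MSAL
SITE_ID = "<site-id>"            # hypothetical placeholder
LIST_ID = "<list-id>"            # hypothetical placeholder

url = (
    f"https://graph.microsoft.com/v1.0/sites/{SITE_ID}/lists/{LIST_ID}/items"
    "?$filter=fields/Created ge '2024-04-17T10:00:01Z' and "
    "fields/Created le '2024-04-17T12:59:32Z' and (fields/CustID eq 'FUL-015')"
    "&expand=fields&top=133"
)

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    # Header discussed above; intended for filters on non-indexed columns.
    "Prefer": "HonorNonIndexedQueriesWarningMayFailRandomly",
}

response = requests.get(url, headers=headers)
print(response.status_code)
print(response.json())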
What’s new in Fundraising and Engagement | April 2024
Microsoft Tech for Social Impact is proud to announce the April 2024 release of Fundraising and Engagement. This release brings significant enhancements, mainly for nonprofit gift processors, and includes valuable enhancements to the Fundraising and Engagement Azure services.
New features
The April release of Fundraising and Engagement includes the following new capabilities:
New Stripe API (payment intents) integration: This update introduces Stripe client-side tokenization to improve the payment experience for users who prefer Stripe as a payment processor in Fundraising and Engagement. We highly recommend using the new Stripe API when creating a payment processor associated with a configuration profile.
Learn more
What’s new in Fundraising and Engagement April 16, 2024 – Microsoft Cloud for Nonprofit | Microsoft Learn
Perform post-deployment configuration tasks for Fundraising and Engagement – Microsoft Cloud for Nonprofit | Microsoft Learn
Configure Fundraising and Engagement – Microsoft Cloud for Nonprofit | Microsoft Learn
What to Do When QuickBooks Migration Failed Unexpectedly After QB Update?
QuickBooks is an indispensable tool for businesses, streamlining accounting processes and financial management. However, migrating data within QuickBooks can sometimes be challenging, especially after a software update. Migration failures can lead to data loss, discrepancies, and disruptions in operations. In this article, we’ll explore common reasons why QuickBooks migration fails after an update and provide detailed steps to troubleshoot and resolve these issues.
Reasons for QuickBooks Migration Failure After Update:
Software Compatibility Issues: QuickBooks updates may introduce changes in the software’s structure or data format, causing compatibility issues during migration. This can lead to errors or incomplete data transfer.
Corrupted Company File: If the company file in QuickBooks is corrupted, migration attempts may fail. Corrupted files can result from various factors, including improper shutdowns, power outages, or software glitches.
Incomplete Update Installation: Sometimes, updates may not install correctly, leaving the software in an inconsistent state. Incomplete installations can affect the migration process, resulting in errors or unexpected behavior.
Network Connectivity Problems: Poor network connectivity or interruptions during data migration can disrupt the process, causing failures or partial transfers of data.
Insufficient System Resources: QuickBooks migration requires adequate system resources, including disk space, memory, and processing power. Insufficient resources can hinder the migration process and lead to failures.
How to Fix QuickBooks Migration Failures:
Verify Software Compatibility: Before initiating a migration process, ensure that the QuickBooks software version is compatible with the update. Check for any known compatibility issues or required updates from Intuit’s official support resources.
Repair Corrupted Company File: If the company file is corrupted, use QuickBooks’ built-in file repair utility to attempt repairs. Navigate to the File menu, select Utilities, and then click on Rebuild Data. Follow the prompts to repair the company file and retry the migration process.
Complete Update Installation: Ensure that the QuickBooks update is installed correctly by verifying the installation logs or running the update process again. If any errors occur during installation, address them accordingly before attempting migration.
Stable Network Connection: Perform QuickBooks migration during off-peak hours to minimize network congestion and ensure a stable connection. If using a wireless network, consider switching to a wired connection to prevent signal disruptions.
Allocate Sufficient Resources: Check the system requirements for QuickBooks migration and ensure that the workstation or server meets the recommended specifications. Close unnecessary applications and processes to free up resources before initiating the migration process.
Backup Data Before Migration: Prior to migration, create a backup of the QuickBooks company file and relevant data. This ensures that in case of migration failures or data loss, you can restore from a known good state without significant repercussions.
Utilize QuickBooks Diagnostic Tools: QuickBooks provides diagnostic tools to troubleshoot common issues, such as the QuickBooks File Doctor and QuickBooks Install Diagnostic Tool. Run these tools to identify and resolve any underlying problems affecting the migration process.
Seek Professional Assistance: If troubleshooting steps fail to resolve the migration issues, consider seeking assistance from QuickBooks experts or Intuit’s support team. They can provide advanced troubleshooting steps or guidance tailored to your specific situation.
Conclusion:
A QuickBooks migration that fails unexpectedly after an update can disrupt business operations and compromise data integrity. By understanding the reasons behind these failures and following the recommended troubleshooting steps, you can mitigate risks and ensure a smooth migration process. Remember to back up data regularly, stay informed about software updates, and leverage available resources for assistance when needed. With careful planning and proactive measures, you can effectively manage QuickBooks migration challenges and maintain seamless financial workflows.
Ink to Text Pen now available in Excel for Windows
Hi Microsoft 365 Insiders,
We’re thrilled to announce the new Ink to Text Pen feature in Excel for Windows! Automatically convert your handwriting into text for quick data entry, and use pen gestures to manipulate or delete cell content with ease. Perfect for snappy edits or when you’re on the go!
Our latest blog has the details: Ink to Text Pen now available in Excel for Windows
Thanks!
Perry Sjogren
Microsoft 365 Insider Social Media Manager
Become a Microsoft 365 Insider and gain exclusive access to new features and help shape the future of Microsoft 365. Join Now: Windows | Mac | iOS | Android
Accelerate your observability journey with Azure Monitor pipeline (preview)
In the ever-evolving landscape of digital infrastructure, transparency in resource and application performance is imperative. Success hinges on visibility, and that's true whether you're operating on Azure, on-premises, or at the edge. As organizations scale their infrastructures and applications, the volume of observability data naturally increases. This surge can complicate the management of networking, data storage, and ingestion, often forcing a trade-off between cost management and observability.
The complexity doesn’t end there. The very tools designed to ingest, process, and route this data can be both costly and complex, adding layers of operational challenges. Moreover, edge infrastructure is deployed near IoT devices for optimal data processing, high availability, and reduced latency. This adds its own set of challenges when it comes to collecting telemetry from such constrained environments.
Recognizing these challenges, our team has been focused on providing a robust, highly scalable, and secure data ingestion solution through Azure Monitor. We are thrilled to announce the preview of the Azure Monitor pipeline at edge.
What is Azure Monitor pipeline?
Azure Monitor pipeline, similar to an ETL (Extract, Transform, Load) process, enhances traditional data collection methods. It streamlines data collection from various sources through a unified ingestion pipeline and utilizes a standardized configuration approach that is more efficient and scalable. This is particularly beneficial for cloud-based monitoring in Azure.
We are now extending our Azure Monitor pipeline capabilities from the cloud to the edge, enabling high-scale data ingestion with centralized configuration management.
What is Azure Monitor pipeline at edge?
Azure Monitor pipeline at edge is a powerful solution designed to facilitate high-scale data ingestion and routing from edge environments to Azure Monitor for observability. It leverages the robust capabilities of the vendor-agnostic tool – OpenTelemetry Collector, which is used by enterprises worldwide to manage high volumes of telemetry each month.
With the Azure Monitor pipeline at edge, organizations can tap into the same highly scalable platform with a standardized configuration and reliability. Whether dealing with petabytes of data or seeking consistent observability experience across Azure, edge, and multi-cloud, this solution empowers organizations to reliably collect telemetry and drive operational excellence.
The Azure Monitor pipeline at edge is equipped with out-of-the-box capabilities to receive telemetry from a diverse range of resources and route it to Azure Monitor. Here are some key features:
High-scale data ingestion: Customers have various devices and resources at the edge, emitting high volumes of data. With Azure Monitor pipeline at edge, you can seamlessly scale to support ingestion of high volumes of data into the cloud. Azure Monitor pipeline can be deployed on your on-premises Kubernetes cluster as an Arc Kubernetes cluster extension. This allows it to adapt to your data scaling needs by running multiple replica sets and provides you with full control to define workflows and route high-volume data to Azure Monitor.
Observing resources in isolated environments: In the manufacturing sector, resources are often located in isolated network zones without direct cloud connectivity, posing challenges for telemetry collection. With the Azure Monitor pipeline at edge, combined with Azure IoT Layered Network Management, you can facilitate a connection between Azure and Kubernetes clusters in isolated networks, deploy the Azure Monitor pipeline at edge, collect data from resources in segmented networks, and route it to Azure Monitor for comprehensive observability.
Reliable data ingestion and data loss prevention: Edge environments frequently encounter intermittent connectivity, leading to potential data loss and disrupted data continuity. The Azure Monitor pipeline at edge allows you to cache logs during periods of intermittent connectivity. When connectivity is re-established, your data is synchronized with Azure Monitor, preventing data loss.
Getting started
It’s super easy to get started! You need to deploy the Azure Monitor pipeline on a single Arc-enabled Kubernetes cluster in your environment. Once that is done, you can configure your resources to emit the telemetry to Azure Monitor pipeline at edge and ingest into Azure Monitor for observability.
Once you Arc-enable your on-premises Kubernetes cluster and the prerequisites are met, go to the Extensions section, select Azure Monitor pipeline extension (preview), and create the instance. Alternatively, from the search bar in the Azure portal, select Azure Monitor pipeline and then click Create.
Enter the information related to the pipeline instance.
The Dataflow tab allows you to create and edit dataflows for the pipeline instance.
Configure your resources to emit the telemetry to the Azure Monitor pipeline.
Learn more in our documentation.
Pricing
There is no additional cost to use Azure Monitor pipeline to send data to Azure Monitor. You will only be charged for data ingestion per the current pricing.
FAQ
What telemetry can be collected using Azure Monitor pipeline? Currently, in public preview, you can collect syslogs and OTLP logs using Azure Monitor pipeline at edge. We will keep expanding the data collection capabilities based on your feedback and requirements.
How can I perform transformations on the telemetry that is collected? You can certainly transform your telemetry! Since this is an extension of Azure Monitor pipeline, you can perform the data collection transformations in the Azure Monitor pipeline at cloud.
Is this another agent for data collection? Azure Monitor pipeline at edge is engineered to function in environments where installing agents on resources is not feasible, whether due to technical limitations or warranty concerns. It enables you to get the telemetry from these resources and acts as a central forwarding component to ingest high volume data.
I have 100 Linux servers in my on-prem environment. Do I need to deploy Azure Monitor pipeline at edge on all of them? You need to deploy the Azure Monitor pipeline at edge on a single Arc-enabled Kubernetes cluster and configure it to ingest data into Azure Monitor. Once that is completed, you can configure your Linux servers to emit telemetry to the Azure Monitor pipeline at edge instance.
Hannover Messe 2024: Scaling Industrial Transformation with Azure’s Adaptive Cloud Approach
As I reflect on Hannover Messe International 2024, it was amazing to see how industrial organizations are embracing this year’s show theme of “energizing a sustainable industry”. Large industry events such as these are incredibly valuable, as we get the opportunity to meet with many of the customers and partners who inform and guide our strategy in this space. This year, we were excited to share our vision for how Azure’s adaptive cloud approach provides the foundation for scaling industrial transformation efforts to the next level. Announcements include how we’re working with the ecosystem to empower customers to do more with their data, new capabilities to help customers build secure, resilient and observable edge applications, and how we’re making it simpler to manage Azure resources in a cohesive way across distributed physical operations.
The opportunity for industrial transformation with an adaptive cloud approach
Today, we're at an inflection point where two of the most significant technology trends – AI and the cloud – are converging to create meaningful outcomes for industrial customers. AI and advanced analytics tools provide the intelligence to optimize business processes, while the cloud offers the global footprint required to scale those outcomes organization-wide, including physical operations. Customers such as Chevron are committed to responsibly applying AI to achieve their objectives of delivering safer and more efficient operations. And Electrolux Group is leveraging the cloud and advanced analytics to keep quality at the forefront of their global manufacturing processes.
Defining the adaptive cloud approach
To drive comprehensive organizational transformation, customers need to be able to harness data across a distributed estate that typically spans a variety of people, places, and processes. To date, however, many organizations have taken a decentralized approach to digitizing physical operations environments that has challenged their ability to successfully scale business outcomes. Today, we see the opportunity for a new approach; one that uses the cloud as a consistent operations and innovation platform to drive visibility, repeatability, and scalability across heterogeneous edge environments. This approach, referred to as adaptive cloud, brings separate teams, sites, and systems into a unified model for operations, applications, and data, so organizations can take advantage of AI across a global operational estate.
Applying the adaptive cloud approach in physical operations environments
This standardized approach to data, applications, and management is enabled by Azure Arc, which allows organizations to leverage best-of-breed Azure capabilities across their entire computing estate for repeatability and scale. Azure IoT Operations, currently in public preview, allows organizations to extend these benefits to their physical operations environments with a unified, enterprise-wide technology architecture and data plane that democratizes data, enables cross-team collaboration, and accelerates decision-making. With Azure IoT Operations, enabled by Azure Arc, data and operational technology professionals can cultivate insights across digital and physical operations with a contextualized edge-to-cloud data fabric, while developers can rapidly build and deploy intelligent applications across boundaries with a consistent set of application development, deployment, and management tools and methodologies. In parallel, IT can remove complexity by centralizing management, security processes, and policies across distributed applications and infrastructure.
The importance of the ecosystem within physical operations environments
As mentioned earlier, physical operations environments have traditionally been managed in a decentralized way. The reason for this paradigm is the highly heterogeneous nature of such environments, which often include assets and devices built by various manufacturers, each with their own tooling and applications. Success in this market won't be achieved by trying to replace the unique value that these ecosystem partners bring to the table. Instead, as a platform company, Microsoft's goal is to provide an open, common pattern that partners can utilize, together providing customers with a common foundation for their industrial applications. This common foundation provides customers with a single place to manage these highly complex environments, as well as the benefit of being able to integrate data from different solutions and sites together for enterprise-wide insights. Partners not only benefit from a customer-centric approach, but also from being able to deliver solutions faster using the flexible, standards-based reference architecture offered by Azure IoT Operations.
Announcements
Today, we have several exciting product and partner announcements that will help industrial customers embrace the transformative benefits of the adaptive cloud approach.
Enabling insights at scale with an open, interoperable foundation
At Microsoft, we are committed to empowering our customers to achieve more with their data and unlocking new insights and opportunities across the industrial ecosystem.
For customers to cultivate insights across their operational environments, they first need access to the data sitting within their industrial assets – and to be able to get that data into a format that will be usable by other applications. To assist with these efforts, Microsoft is working with the ecosystem of connectivity partners for Azure IoT Operations to modernize industrial systems and devices. These partners provide data translation and normalization services across heterogeneous environments for a seamless and secure data flow from the shop floor to the cloud. We leverage open standards and provide consistent control and management capabilities for OT and IT assets. To date, we have established integrations with connectivity partners Advantech, PTC, and Softing that are uniquely positioned in their field and enable a wide range of customers. Beyond connectivity, we are also partnering with Rockwell Automation to deliver a set of composable solutions that take advantage of the adaptive cloud approach to unlock the promise of rapid digital transformation at scale across manufacturing scenarios.
Additionally, to help drive interoperability across edge applications, edge devices, and edge orchestration software, Microsoft is also proud to participate and contribute to Margo, a new open standard initiative for interoperability at the edge of industrial automation ecosystems. Hosted by The Linux Foundation, the Margo initiative defines the mechanisms for interoperability between edge applications, edge devices, and edge orchestration software to help accelerate building, operating, and scaling complex automation solutions at the edge. It will help customers grow operations quicker and help them achieve their digital transformation objectives faster.
Ultimately, the goal of these intelligent applications is to support better decision-making. Digital twins allow organizations to optimize decision-making by modelling possibilities based on actual past outcomes and the predicted future. In this area, in a collaborative move with the W3C Consortium, Siemens and Microsoft have announced the convergence of the Digital Twin Definition Language (DTDL), the language used by Azure Digital Twins to describe digital twin models and interfaces, with the W3C Web of Things standard. This convergence will help consolidate digital twin definitions for assets in the industry and enable new technology innovation like automatic asset onboarding with the help of generative AI technologies.
Providing enterprise class resiliency, observability and security for edge applications
While Azure IoT Operations provides the foundation for industrial data flow, customer use cases are implemented in applications running on the edge that use that data. To that end, we’re investing in new capabilities to make it easier to build those applications. Today, we’re excited to announce three new capabilities for the development of enterprise-class Kubernetes applications running on the edge in the realms of application resiliency, observability and security.
Edge Storage Accelerator public preview – At the edge, Kubernetes storage capabilities vary in durability, persistence, and performance, posing a challenge for customers seeking reliable solutions. To address these challenges, we recently introduced Edge Storage Accelerator (ESA), a storage system designed for Arc-connected Kubernetes clusters. ESA offers fault-tolerant, highly available cloud-native persistent storage, empowering customers to confidently host stateful applications like Azure IoT Operations, custom apps, and other Arc extensions with ease and reliability. Through standard Kubernetes APIs, users can effortlessly attach containerized applications managing file data stored on Azure Blob storage, leveraging its limitless cloud storage capacity for edge applications. ESA's flexible deployment options, simplified connection via a Container Storage Interface (CSI) driver, and platform neutrality transform edge storage solutions, alleviating customer pain points and enabling seamless operations at the edge.
Azure Monitor pipeline public preview – As enterprises scale their infrastructure and applications, the volume of observability data naturally increases, and it is challenging to collect telemetry from certain restricted environments. With today’s announcement, we are extending our Azure Monitor pipeline at the edge to enable customers to collect telemetry at scale from their edge environment and route to Azure Monitor for observability. With Azure Monitor pipeline at edge, customers can collect telemetry from the resources in segmented networks that do not have a line of sight to cloud. Additionally, the pipeline prevents data loss by caching the telemetry locally during intermittent connectivity periods and backfilling to the cloud, improving reliability and resiliency.
Secrets Sync Controller private preview – Industrial customers want the confidence and scalability that comes with unified secrets management in the cloud, while maintaining disconnection-resilience for operational activities at the edge. To help them with this, the new Secret Synchronization Controller for Kubernetes (currently in private preview) automatically synchronizes secrets from an Azure Key Vault to a Kubernetes cluster for offline access. This means customers can use Azure Key Vault to store, maintain, and rotate secrets, even when running a Kubernetes cluster in a semi-disconnected state. Synchronized secrets are stored in the cluster secret store, making them available as Kubernetes secrets to be used in all the usual ways—mounted as data volumes or exposed as environment variables to a container in a Pod.
Delivering simplified, cohesive management of physical operations environments
During HMI last week, we were also excited to announce the public preview of Azure Arc site manager. Arc site manager extends existing grouping constructs in Azure, allowing customers to group their resources, including Azure IoT Operations clusters, and assets by physical location. IT professionals can use Arc site manager to create sites to organize their Arc-enabled servers, clusters, and other assets, and view aggregated monitoring data. Arc site manager simplifies the overall monitoring and management of Azure resources by integrating individual resource pages, Azure Monitor, Update Management Center, and other offerings into a single cohesive experience. With Arc site manager, IT administrators can easily monitor health, updates, security, and other key areas for each site. Because Azure IoT Operations, along with the new services announced today are all Kubernetes based Arc-enabled services, they can be centrally managed using Arc site manager.
In addition to Azure Arc site manager, we also demonstrated a new Azure edge infrastructure solution for small form factor devices like the Lenovo ThinkEdge SE30 at the show. This new solution, which supported our Azure IoT Operations demo on the expo floor, runs AKS enabled by Azure Arc directly on bare metal with Azure Linux, with the option to cluster multiple nodes for availability. To learn more and register interest for the preview, head over to the Azure Stack blog.
We want to thank all the customers, partners and attendees who engaged with us at Hannover Messe 2024. We firmly believe Azure’s open and standardized strategy, an adaptive cloud approach, can help industrial organizations reach the next level of transformation and we’re excited to partner with you on that journey.
To learn more about how Azure’s adaptive cloud approach can help you cultivate insights across digital and physical operations, please read our latest blogs:
Advancing hybrid cloud to adaptive cloud with Azure | Microsoft Azure Blog
Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Accelerating Industrial Transformation with Azure IoT Operations – Microsoft Community Hub
New Member Introduction and Question
Hi all,
Paul here. I am new to MS Teams and Forms. My company merged into a regional one, and the management is big on tech. I enjoy working in the engineering and construction field because it gives me variety in environments and tasks. So far, I am comfortable with MS Office, especially the new Share feature, which lets me keep my reports on my OneDrive and share them with our admin! This is much better than the old attach-to-email method because I can fix mistakes and not have to send her multiple emails with the updates. Greetings!
My question:
We use “break cards” to record and transmit (via physical handoff) our laboratory data to the admin for relay to the clients. These cards are updated roughly 5 times over a month in data entry fields, but they also contain multiple fields such as job name and number that stay the same. I am in the Engineering/Construction field.
In perusing MS Forms, it seems this application is oriented towards surveys (administrative) and quizzes (education). Is there anything suited to my application (above)?
Sincerely,
Paul
Is there a new CPOR Guide PDF?
I have this walk-through guide for claiming partner of record (CPOR). It’s from FY20 and the way you do it has since changed so it’s out of date. Is there a newer version anywhere?
Old FY20 version is here – https://partner.microsoft.com/en-us/asset/collection/claiming-partner-of-record-cpor-resources#/
Improving the DevOps Experience for Azure Logic Apps Standard
With the trend towards distributed and native cloud apps, organizations are dealing with more distributed components across more environments. To maintain control and consistency, you can automate your environments and deploy more components faster with higher confidence by using DevOps tools and processes.
Azure Logic Apps Standard just launched a set of preview features that help you automate the steps in setting up DevOps processes for your applications. In this blog post, you will find more about these new features:
Parameterize connection references
Automate deployment scripts generation in Visual Studio Code
Enable zero downtime deployment scenarios
Parameterize connection references
Connectors in Azure Logic Apps enable seamless integration with external systems and services across different protocols, platforms, and authentication methods. Azure Logic Apps Standard separates the physical and logical aspects of connectors thanks to the connection reference file (connections.json), which maps the connections used in workflows to live connections (Azure resources, Azure Functions, Azure API Management, and in-app references).
Until now, these references were tied to the connection that you defined at design time, which made the process to abstract the code for multiple environments a manual process. However, starting with the Visual Studio Code extension for Azure Logic Apps version 4.4.3, connections are parameterized by default, which simplifies the process of deploying these applications to other environments.
What does connection reference parameterization look like?
In the connections.json file, new managed connections look like the following template:
"myconnection": {
  "api": {
    "id": "/subscriptions/@{appsetting('WORKFLOWS_SUBSCRIPTION_ID')}/providers/Microsoft.Web/locations/@{appsetting('WORKFLOWS_LOCATION_NAME')}/managedApis/connectorname"
  },
  "connection": {
    "id": "/subscriptions/@{appsetting('WORKFLOWS_SUBSCRIPTION_ID')}/resourceGroups/@{appsetting('WORKFLOWS_RESOURCE_GROUP_NAME')}/providers/Microsoft.Web/connections/myconnection"
  },
  "connectionRuntimeUrl": "@{appsetting('myconnection-connectionRuntimeUrl')}",
  "authentication": "@parameters('myconnection-connectionAuthentication')"
}
Each property is parameterized as follows:
api.id: Subscription and location are derived from app settings.
connection.id: Subscription and resource group are derived from app settings.
connection.connectionRuntimeUrl: This value is derived from app settings. The app setting key is defined as <connection_reference_name>-connectionRuntimeUrl.
connection.authentication: This value is derived from the parameters file. The key is defined as <connection_reference_name>-connectionAuthentication.
For connection authentication, a new entry is created in the parameters file, per the following template:
"myconnection-connectionAuthentication": {
  "type": "Object",
  "value": {
    "type": "Raw",
    "scheme": "Key",
    "parameter": "@appsetting('myconnection-connectionKey')"
  }
}
Note: As a secret, the connection key is referenced in your app settings. Connection keys have different values for local and Azure deployments. When deployed to Azure, the connection key value should reference the managed identity associated with your Standard logic app resource. The latest Visual Studio Code extension also has the capability to auto-generate deployment scripts, which makes sure that you have a ready-to-use cloud version of the parameters file, so that you don’t have to guess at the changes.
Opt in for connection parameterization
This experience is an opt-in for you as you might already have projects in flight that use your own solution for parameterization. After you install extension version 4.4.3, you get the following pop-up message during new project startup:
Yes: Enables connection parameterization and updates any project that you open with the new parameterization capability.
No: Doesn't enable parameterization for your current project, but asks again the next time that you open a project.
Don't warn again: Opts out of the parameterization feature and doesn't show the message again. However, you can opt in later at any time.
To opt in later, go to the extension settings in Visual Studio Code and select the following option:
Automate deployment scripts generation
You can generate ARM templates and Azure DevOps pipelines to support deployment automation for your Standard logic apps, starting with the Visual Studio Code extension for Azure Logic Apps Standard version 4.4.3.
For more information and full walkthrough that shows how to generate and connect these templates to your Azure DevOps platform, see our official documentation at Automate build and deployment for Standard logic app workflows with Azure DevOps.
Azure Logic Apps Build and Release Actions for Azure DevOps
Two new actions now exist for Azure DevOps, which the Visual Studio Code extension uses to generate build and release pipelines:
Azure Logic Apps Standard Build
Azure Logic Apps Standard Release
Before you can use this new pipeline capability, you must first install these actions, which you can find on the Visual Studio Marketplace.
Enable zero downtime deployment scenarios
To deploy mission-critical logic apps that are always available and responsive, even during updates or maintenance, you can enable zero downtime deployment by creating and using deployment slots. Zero downtime means that when you deploy new versions of your app, end users shouldn’t experience disruption or downtime. Deployment slots, which are now available in public preview for Azure Logic Apps, are isolated nonproduction environments that host different versions of your Standard logic app and provide the following benefits:
Swap a deployment slot with your production slot without interruption. That way, you can update your logic app and workflows without affecting availability or performance.
Validate any changes in a deployment slot before you apply those changes to the production slot.
Roll back to a previous version, if anything goes wrong with your deployment.
Reduce the risk of negative performance when you must exceed the recommended number of workflows per logic app.
For more information, see our official documentation at Set up deployment slots to enable zero downtime deployment in Azure Logic Apps.
New Planner experience in Teams showing all tasks except Project tasks
I have the new Planner experience in Teams. In My Tasks and My Day sections I can see Planner tasks, Loop tasks, Flagged email tasks but I don’t see tasks from Premium Plans a.k.a. Projects. Everything that I have read says that I should be seeing my tasks from Premium Plans here as well. Anyone else experiencing the same issue?
Is there any available pricing for Copilot for Finance?
I’d like to get this on my plan of record with IT and want to estimate a budget but have no idea of the pricing model for this offering.
New Planner Experience not showing Premium Plan (MS Project) tasks
I have the new Planner experience in Teams. In My Tasks and My Day sections I can see Planner tasks, Loop tasks, Flagged email tasks but I don’t see tasks from Premium Plans a.k.a. Projects. Everything that I have read says that I should be seeing my tasks from Premium Plans here as well. Anyone else experiencing the same issue?
03 Azure Machine Learning and OSS Model Fine-Tuning
Hyperparameter optimization, also known as hyperparameter tuning, is a fundamental challenge in the field of machine learning. It involves the selection of an optimal set of hyperparameters for a given learning algorithm. Hyperparameters are parameters that dictate the behavior of the learning process, while other parameters, such as node weights, are learned from the data.
A machine learning model can often require different constraints, weights, or learning rates to effectively capture diverse data patterns. These adjustable measures, known as hyperparameters, must be carefully tuned to ensure that the model can successfully solve the machine learning problem at hand. Hyperparameter optimization seeks to find a combination of hyperparameters that yields an optimal model, minimizing a predefined loss function on independent data.
To achieve this, an objective function is utilized, which takes a set of hyperparameters as input and returns the corresponding loss. The goal is to find the set of hyperparameters that maximizes the generalization performance of the model. Cross-validation is commonly employed to estimate this performance and aid in the selection of optimal hyperparameter values. By maximizing the generalization performance, hyperparameter optimization plays a crucial role in enhancing the overall effectiveness and accuracy of machine learning models.
In this post, we will cover the open-source tools and Azure Machine Learning tools for hyperparameter tuning. The main techniques used for hyperparameter tuning are Grid Search, Random Search, and Bayesian Search, along with gradient-based optimization, which is more commonly used for neural networks.
Grid Search
In the realm of hyperparameter optimization, the conventional approach has been to employ grid search or parameter sweep. This technique involves exhaustively exploring a predetermined subset of hyperparameters for a learning algorithm. To guide the grid search algorithm, a performance metric is selected, often determined through cross-validation on the training set or evaluation on a dedicated validation set.
Grid search operates by systematically testing different combinations of hyperparameters within the defined subset. This method, however, can become computationally expensive, especially when dealing with a large number of hyperparameters or a wide range of possible values. Despite its limitations, grid search remains widely used due to its simplicity and interpretability.
During the grid search process, various performance metrics are measured for each combination of hyperparameters. These metrics aid in assessing the model’s effectiveness and allow for the identification of hyperparameter configurations that lead to optimal performance. By evaluating the model’s performance on either the training set or a separate validation set, grid search facilitates the selection of the most appropriate hyperparameter values.
While grid search has proven to be a valuable technique, alternative methods have emerged to address its shortcomings, such as the computationally efficient and automated approaches of Bayesian optimization and random search. These advanced methods provide more sophisticated ways to explore the hyperparameter space and discover optimal configurations, revolutionizing the field of hyperparameter optimization.
sklearn.model_selection.GridSearchCV — scikit-learn 1.4.2 documentation
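For illustration, a minimal GridSearchCV sketch (using an SVM and a toy dataset purely as placeholders) might look like this:

```python
# A minimal GridSearchCV sketch: exhaustively evaluates every combination
# in the parameter grid using 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # placeholder dataset

param_grid = {
    "C": [0.1, 1, 10, 100],       # regularization strength
    "kernel": ["linear", "rbf"],  # kernel type
    "gamma": ["scale", "auto"],   # kernel coefficient
}

search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV accuracy:", search.best_score_)
```

Note how the number of fitted models grows multiplicatively with every hyperparameter added to the grid, which is precisely the cost the next two approaches try to avoid.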
Random Search
Random Search offers an alternative to the exhaustive enumeration of all possible combinations of hyperparameters by randomly selecting them. This approach can be applied not only to discrete settings but also to continuous and mixed spaces, providing greater flexibility. Random Search has been found to outperform Grid Search, particularly in scenarios where only a small number of hyperparameters significantly impact the final performance of the machine learning algorithm.
In cases where the optimization problem exhibits a low intrinsic dimensionality, Random Search proves to be particularly effective. This refers to situations where the hyperparameters’ interdependencies are limited, allowing for a more efficient exploration of the hyperparameter space. Moreover, Random Search lends itself to embarrassingly parallel implementation, meaning that it can be easily distributed across multiple computing resources for faster processing.
One of the advantages of Random Search is its ability to incorporate prior knowledge by specifying the distribution from which to sample hyperparameters. This enables domain experts to guide the search process based on their understanding of the problem at hand. Despite its simplicity, Random Search remains a significant baseline against which new hyperparameter optimization methods can be compared.
While Random Search has been instrumental in advancing hyperparameter optimization, it is important to note that other sophisticated techniques, such as Bayesian optimization, have emerged as promising alternatives. These methods leverage probabilistic models to intelligently explore the hyperparameter space and efficiently find optimal configurations. The continuous development of new approaches continues to enhance the field of hyperparameter optimization, offering exciting opportunities for improving the performance and efficiency of machine learning models.
sklearn.model_selection.RandomizedSearchCV — scikit-learn 1.4.2 documentation
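For illustration, a minimal RandomizedSearchCV sketch might look like the following; the estimator, dataset, and distributions are placeholders, but they show how continuous, log-scaled priors can be sampled:

```python
# A minimal RandomizedSearchCV sketch: samples a fixed number of random
# hyperparameter configurations, here drawn from continuous distributions.
from scipy.stats import loguniform, randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)  # placeholder dataset

param_distributions = {
    "n_estimators": randint(50, 500),            # discrete range
    "max_depth": randint(2, 20),
    "min_samples_split": loguniform(1e-3, 0.5),  # continuous, log-scaled prior
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=30,    # number of sampled configurations
    cv=5,
    scoring="accuracy",
    n_jobs=-1,    # candidates are embarrassingly parallel
    random_state=0,
)
search.fit(X, y)
print("Best parameters:", search.best_params_)
```

The n_iter parameter fixes the evaluation budget independently of the size of the search space, which is what makes random search practical when grid search is not.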
Bayesian Search
Bayesian optimization is a powerful method for globally optimizing noisy black-box functions. When applied to hyperparameter optimization, Bayesian optimization constructs a probabilistic model that captures the relationship between hyperparameter values and the objective function evaluated on a validation set. Through an iterative process, Bayesian optimization intelligently selects hyperparameter configurations based on the current model, evaluates their performance, and updates the model to gather valuable information about the function and, more importantly, the location of the optimum.
The key idea behind Bayesian optimization is to strike a balance between exploration and exploitation. Exploration involves selecting hyperparameters that yield uncertain outcomes, while exploitation focuses on hyperparameters that are expected to be close to the optimum. By carefully navigating this trade-off, Bayesian optimization effectively explores the hyperparameter space, gradually narrowing down the search to regions with higher potential for optimal performance.
In practice, Bayesian optimization has demonstrated superior performance compared to traditional methods such as grid search and random search. This advantage stems from its ability to reason about the quality of experiments before actually running them. By leveraging the probabilistic model, Bayesian optimization can make informed decisions about which hyperparameter configurations are most likely to lead to better results, thereby reducing the number of evaluations required.
The efficiency and effectiveness of Bayesian optimization have been widely observed in various domains. Researchers and practitioners have embraced this approach due to its ability to achieve better outcomes with fewer evaluations, making it an invaluable tool for hyperparameter optimization.
skopt.BayesSearchCV — scikit-optimize 0.8.1 documentation
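As a point of comparison with the scikit-learn searchers above, a minimal, hypothetical BayesSearchCV sketch (using scikit-optimize with an SVM and a toy dataset purely for illustration) might look like this:

```python
# A minimal BayesSearchCV sketch (scikit-optimize): builds a probabilistic
# surrogate model of the objective and picks each new candidate by balancing
# exploration and exploitation.
from skopt import BayesSearchCV
from skopt.space import Categorical, Real
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # placeholder dataset

search_spaces = {
    "C": Real(1e-3, 1e3, prior="log-uniform"),
    "gamma": Real(1e-4, 1e1, prior="log-uniform"),
    "kernel": Categorical(["linear", "rbf"]),
}

opt = BayesSearchCV(
    SVC(),
    search_spaces,
    n_iter=32,  # total evaluations, typically far fewer than a full grid
    cv=5,
    scoring="accuracy",
    random_state=0,
)
opt.fit(X, y)
print("Best parameters:", opt.best_params_)
print("Best CV accuracy:", opt.best_score_)
```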
Hyperparameter optimization – Wikipedia
Azure
Having familiarized ourselves with the foundational aspects of hyperparameter tuning, a natural inquiry arises: Does Azure Machine Learning (AML) provide support for hyperparameter tuning? The answer to this question is undoubtedly affirmative, as AML offers comprehensive hyperparameter tuning capabilities. Detailed documentation on this topic can be accessed in the link below, providing users with valuable guidance and information.
Hyperparameter tuning a model (v2) – Azure Machine Learning | Microsoft Learn
AML, through its Python SDK v2, facilitates hyperparameter tuning by offering three distinct algorithms: Grid, Random, and Bayesian. These algorithms empower users to effectively explore the hyperparameter search space and optimize their machine learning models. To leverage the hyperparameter tuning capabilities in AML, the following essential steps can be followed (a minimal SDK v2 sketch follows the list):
Define the parameter search space for your trial: Specify the range and feasible values for each hyperparameter that will undergo tuning.
Specify the sampling algorithm for your sweep job: Select the desired algorithm that will be employed to sample hyperparameter configurations during the tuning process.
Specify the objective to optimize: Define the performance metric or objective function that will be utilized to evaluate and compare the various hyperparameter configurations.
Specify an early termination policy for low-performing jobs: Establish criteria that will automatically terminate underperforming jobs during the hyperparameter tuning process.
Define limits for the sweep job: Set the maximum number of iterations or allocate resources according to your requirements for the hyperparameter tuning experiment.
Launch an experiment with the defined configuration: Initiate the hyperparameter tuning experiment by utilizing the specified settings and parameters.
Visualize the training jobs: Monitor and analyze the progress and outcomes of the hyperparameter tuning experiment, including the performance of individual training jobs.
Select the best configuration for your model: Upon completion of the hyperparameter tuning experiment, identify the hyperparameter configuration that yielded the most favorable performance, and incorporate it into your machine learning model.
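Putting the steps above together, here is a minimal, hypothetical sketch using the azure.ai.ml SDK v2. The subscription, resource group, workspace, compute cluster, script folder, script arguments, environment, and metric name are all assumed placeholders, not values from this post; adapt them to your own workspace and training code.

```python
# A minimal Azure ML SDK v2 sweep sketch covering the steps listed above.
from azure.ai.ml import MLClient, command
from azure.ai.ml.sweep import BanditPolicy, Choice, Uniform
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Base command job; train.py is a hypothetical script that parses these
# arguments and logs the primary metric (e.g. via MLflow) as "accuracy".
job = command(
    code="./src",  # placeholder folder containing train.py
    command=(
        "python train.py "
        "--learning_rate ${{inputs.learning_rate}} "
        "--n_estimators ${{inputs.n_estimators}}"
    ),
    inputs={"learning_rate": 0.01, "n_estimators": 100},
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",  # placeholder compute cluster name
)

# Steps 1-2: define the search space and the sampling algorithm.
job_for_sweep = job(
    learning_rate=Uniform(min_value=0.001, max_value=0.1),
    n_estimators=Choice(values=[50, 100, 200]),
)
sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",  # "grid" and "bayesian" are also supported
    primary_metric="accuracy",    # step 3: the objective to optimize
    goal="Maximize",
)

# Step 4: terminate clearly underperforming trials early.
sweep_job.early_termination = BanditPolicy(slack_factor=0.1, evaluation_interval=2)

# Step 5: cap the number of trials, concurrency, and overall runtime (seconds).
sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=7200)

# Step 6: submit the sweep; step 7: inspect the trials in the studio UI.
returned_job = ml_client.create_or_update(sweep_job)
print(returned_job.studio_url)
```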
Principal author:
Shep Sheppard | Senior Customer Engineer, FastTrack for ISV and Startups
Other contributors:
Yoav Dobrin | Principal Customer Engineer, FastTrack for ISV and Startups
Jones Jebaraj | Senior Customer Engineer, FastTrack for ISV and Startups
Olga Molocenco-Ciureanu | Customer Engineer, FastTrack for ISV and Startups
sick to death of new outlook replacing standard mail app
Every week the new Outlook automatically switches over from the standard Mail app.
I click the slider button to disable the new Outlook again, and it asks me why I don't want the new Outlook mailbox; I skip this as I have filled it in so many times. Grrrr.
I don't like the new Outlook; it's too busy and has too many options that I just don't want or need.
I like the simple Mail app that came with Windows 11 just fine.
Every week without fail it upgrades itself to the new Outlook.
I'm sick of this crap. Stop pushing your software at me, I don't want it. I wish I had kept my MacBook!
Read More
AWS S3 to SQL Server
I have a file on Amazon S3 that updates daily, and it’s in SQL format. With these options, can I select a specific database to create and insert all the data into it in SQL Server?
Essentially, I’m asking if I can transfer data from Amazon S3 to SQL Server and, once imported, run SQL queries directly on the SQL Server?
Read More