Tag Archives: microsoft
IIS 10 serving wrong SSL cert
All sites on our server, when browsed to, present one wildcard SSL certificate, even though the other sites have a different SSL certificate set in their bindings.
I checked every binding and made sure to select the specific IP address instead of "All Unassigned", as some forums suggested. All of the bindings with SSL have a host name and "Require Server Name Indication" checked.
I don't know what else to do! Is there any help you can give me on this? I hadn't changed any SSL or binding settings recently; they just stopped working today.
It just serves the *.hoaguru.com SSL certificate to all sites, even ones whose bindings point to a different SSL certificate.
The sites that do use the *.hoaguru.com SSL certificate are very important; I cannot remove them.
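For reference, I can dump the certificate bindings that HTTP.sys has actually registered (including the SNI hostname bindings) with:
netsh http show sslcert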
Help with concatenate with IF function for blank date
I am using Concatenate (CONCAT) to pull 3 date values from a worksheet within a workbook.
=CONCAT(TEXT('Liability Schedule'!F219,"mm/dd/yyyy"),"
",TEXT('Liability Schedule'!F220,"mm/dd/yyyy"),"
",TEXT('Liability Schedule'!F221,"mm/dd/yyyy"))
However, if the Liability Schedule cell doesn't have a value (blank), Excel defaults to 01/00/1900. I'm trying to expand the formula so that IF the referenced cell in the Liability Schedule is blank, it is replaced with "N/A".
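For reference, one pattern that could address this wraps each TEXT call in an IF/ISBLANK test, so a blank cell yields "N/A" instead of 01/00/1900 (a sketch, untested; ", " is used as the separator here, so swap in the original line break if needed):
=CONCAT(IF(ISBLANK('Liability Schedule'!F219),"N/A",TEXT('Liability Schedule'!F219,"mm/dd/yyyy")),", ",IF(ISBLANK('Liability Schedule'!F220),"N/A",TEXT('Liability Schedule'!F220,"mm/dd/yyyy")),", ",IF(ISBLANK('Liability Schedule'!F221),"N/A",TEXT('Liability Schedule'!F221,"mm/dd/yyyy")))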
Highlighting tasks that need someone's attention without assigning the tasks to them
I have a request from a User who uses Planner to manage projects for a multi-functional team. They want a way to easily see what tasks and projects require their attention. I thought creating a custom priority or a Flag to mark these would be a good idea, but I can’t find how to do that. Has anyone else had this issue or has a solution worked for them?
Dev Channel update to 129.0.2752.4 is live.
Hello Insiders! We released 129.0.2752.4 to the Dev channel! This includes numerous fixes. For more details on the changes, check out the highlights below.
Added Features:
Added an observer to track extension uninstalls in Browser Essentials.
Improved Reliability:
Fixed an issue where the browser crashes when toggling off Gamer Mode.
Changed Behavior:
Resolved an issue where browser-specific attributes were not visible in the tooltip under autofill.
Resolved an issue where the ‘X’ icon was not clearly visible on the ‘Leave’ dialog in Dark mode under personalization.
Resolved an issue where clicking the ‘back’ button in the header would close the pane instead of navigating back to the customization page under personalization.
Resolved an issue where tabs failed to close in the tab center, causing UI display abnormalities.
Fixed an issue where the captured selection could extend beyond the screen range in screenshots.
Fixed an issue where open tab groups were not hidden in the tab group pane.
Fixed an issue where the bubble notification was displayed even when the sidebar was hidden.
Fixed an issue where, in dark mode, both the font and page background were white, rendering the content unreadable on the workspaces-internal page.
Android:
Fixed an issue where the Omnibox action icon was incorrect on Android.
Resolved an issue where the title of top sites was not fully displayed in the ‘Frequently Visited’ section when added to the home page on Android.
iOS:
Resolved an issue where the menu on bing.com could not be opened on iOS.
Fixed an issue on iOS where the ‘Default browser prompt’ would often fail to appear.
Resolved an issue where the Tab center background color appeared black in light mode on iOS.
See an issue that you think might be a bug? Remember to send that directly through the in-app feedback by heading to the … menu > Help and feedback > Send feedback and include diagnostics so the team can investigate.
Thanks again for sending us feedback and helping us improve our Insider builds.
~Gouri
Selecting every 46th row in Excel, then copy/paste into a new sheet
Hi,
I have approx. 160,000 rows of data.
Ideally I want to choose every 46th row and then use those rows as my sample.
OR
How can I choose 3,466 random rows from an Excel workbook of 160,000 and put them in a new document?
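For reference, with Excel 365's dynamic array functions both approaches can be sketched as follows (assuming the data sits in A2:D160001, a placeholder range; adjust to your sheet):
=FILTER(A2:D160001, MOD(SEQUENCE(ROWS(A2:A160001)), 46)=0)
=TAKE(SORTBY(A2:D160001, RANDARRAY(ROWS(A2:A160001))), 3466)
The first formula keeps every 46th row; the second shuffles the rows with a random sort key and takes the first 3,466. The spilled results can then be copied into a new workbook.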
The Excel PROPER function is not working
I have a cell in column B that is all CAPS. I would like to convert it to proper case:
first letter capitalized and the remaining letters lower case; the second word should likewise have its first letter capitalized and the remaining letters lower case. I am on Office 365.
This is what I use; notice that the value for column B does not get generated.
I did format the column B cells to General. The formula exists in column C.
When I type the formula, it just appears as the formula text instead of a value.
XICENTE CAYANS   =PROPER(B10)
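For reference, with the name in B10, entering this in C10 should return the proper-cased value (a sketch of standard behavior: if the cell shows the formula text itself, the cell is likely formatted as Text; set it to General and re-enter the formula with F2 then Enter to force recalculation):
=PROPER(B10)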
Major Version Upgrades for Azure Database for MySQL Flexible Server Burstable SKU on Azure Portal
We’re excited to announce a significant improvement for Azure Database for MySQL users, the ability to perform major version upgrades directly on Burstable SKU compute tiers through the Azure portal. This enhancement makes it easier than ever to upgrade to the latest MySQL versions with just a few clicks.
Why this matters
Major version upgrades are critical for accessing the latest features, performance improvements, and security enhancements in MySQL. However, these upgrades can be resource-intensive, demanding substantial CPU and memory resources. Burstable SKU instances, which are optimized for cost efficiency with variable performance, are credit based and often face challenges in handling these upgrades due to their limited resources.
Due to the challenges mentioned above, major version upgrades were not supported directly on Burstable SKU instances previously. Users had to manually upgrade to a General Purpose (GP) or Business Critical (BC) SKU before initiating the upgrade. After the upgrade, users needed to either downgrade back to the original Burstable SKU or decide to stay on the GP or BC SKU, followed by necessary clean-up tasks. This manual process was cumbersome and time-consuming.
To overcome this, we’ve streamlined the upgrade process. When you initiate a major version upgrade on a Burstable SKU instance, the system automatically upgrades the compute tier to a General Purpose SKU. This ensures that the upgrade process has the necessary resources to complete successfully.
Key benefits
The key benefits of this functionality are detailed in the following sections.
Seamless upgrade process
The new upgrade process is designed to be seamless and user-friendly. Here’s how it works:
1. Initiate the upgrade: In the Azure portal, select your existing Azure Database for MySQL Burstable SKU server, and then select Upgrade.
2. Validate schema compatibility: Before proceeding, use Oracle's official upgrade checker utility to validate that your current database schema is compatible with MySQL 8.0; this helps identify any potential issues that could disrupt the upgrade.
When you use Oracle's official tool to check schema compatibility, you will encounter some warnings indicating unexpected tokens in stored procedures, such as:
mysql.az_replication_change_master – at line 3,4255: unexpected token 'REPLICATION'
mysql.az_add_action_history – PROCEDURE uses obsolete NO_AUTO_CREATE_USER sql_mode
You can safely ignore these warnings. They refer to built-in stored procedures prefixed with mysql., which are used to support Azure MySQL features. These warnings do not affect the functionality of your database.
3. Automatic compute tier upgrade: To ensure sufficient resources are available for the upgrade, the system will automatically upgrade your Burstable service tier instance to the General Purpose service tier.
4. Select which service tier to use after the upgrade: During the initial upgrade steps, you’ll be prompted to select whether to remain on the General Purpose service tier or revert to the Burstable service tier after the upgrade completes.
5. Perform the upgrade: The major version upgrade to MySQL 8.0 is performed seamlessly.
6. Post-upgrade option: After the upgrade, the system will either retain the General Purpose SKU or revert to the Burstable SKU, based on the selection you made during the initial upgrade steps (the default option is to use B2S).
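For teams that prefer scripting, the same major version upgrade is also exposed through the Azure CLI. A sketch (resource group and server name are placeholders; confirm the parameters against the current az mysql flexible-server reference):
az mysql flexible-server upgrade --resource-group myResourceGroup --name mydemoserver --version 8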
Enhanced reliability
By ensuring that your compute tier has adequate resources, this new process significantly enhances the reliability of major version upgrades. You can be confident that your upgrade will proceed smoothly, reducing the risk of interruptions or failures.
Cost management
We understand that cost management is a key concern for our users. While upgrading to a General Purpose SKU incurs additional costs, this approach helps ensure the success of your upgrade, so you avoid the potential costs and downtime associated with failed upgrade attempts.
Conclusion
Upgrading your Azure Database for MySQL instance based on the Burstable service tier to a major new version is now simpler and more efficient. With just a few clicks in the Azure portal, you can ensure that your database is up-to-date and take advantage of the latest MySQL features and improvements. For more detailed information and step-by-step instructions, please visit our documentation page.
We're committed to continuously improving your experience with Azure Database for MySQL. We hope this new feature helps you manage your databases more effectively so that you can take full advantage of the powerful capabilities of MySQL.
If you have any questions about the information provided in this post, please leave a comment below or contact us directly at AskAzureDBforMySQL@service.microsoft.com. Thank you!
Expanding GenAI Gateway Capabilities in Azure API Management
In May 2024, we introduced GenAI Gateway capabilities – a set of features designed specifically for GenAI use cases. Today, we are happy to announce that we are adding new policies to support a wider range of large language models through Azure AI Model Inference API. These new policies work in a similar way to the previously announced capabilities, but now can be used with a wider range of LLMs.
Azure AI Model Inference API enables you to consume the capabilities of models, available in Azure AI model catalog, in a uniform and consistent way. It allows you to talk with different models in Azure AI Studio without changing the underlying code.
Working with large language models presents unique challenges, particularly around managing token resources. Token consumption impacts cost and performance of intelligent apps calling the same model, making it crucial to have robust mechanisms for monitoring and controlling token usage. The new policies aim to address challenges by providing detailed insights and control over token resources, ensuring efficient and cost-effective use of models deployed in Azure AI Studio.
LLM Token Limit Policy
LLM Token Limit policy (preview) provides the flexibility to define and enforce token limits when interacting with large language models available through the Azure AI Model Inference API.
Key Features
Configurable Token Limits: Set token limits for requests to control costs and manage resource usage effectively
Prevents Overuse: Automatically blocks requests that exceed the token limit, ensuring fair use and eliminating the noisy neighbour problem
Seamless Integration: Works seamlessly with existing applications, requiring no changes to your application configuration
Learn more about this policy here.
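For illustration, applying a token limit in the inbound policy section might look like the following sketch (modeled on the analogous azure-openai-token-limit policy; verify the exact element and attribute names in the linked documentation):
<llm-token-limit counter-key="@(context.Subscription.Id)" tokens-per-minute="5000" estimate-prompt-tokens="true" />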
LLM Emit Token Metric Policy
LLM Emit Token Metric policy (preview) provides detailed metrics on token usage, enabling better cost management and insights into model usage across your application portfolio.
Key Features
Real-Time Monitoring: Emit metrics in real-time to monitor token consumption.
Detailed Insights: Gain insights into token usage patterns to identify and mitigate high-usage scenarios
Cost Management: Split token usage by any custom dimension to attribute cost to different teams, departments, or applications
Learn more about this policy here.
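For illustration, a sketch of emitting token metrics split by custom dimensions (modeled on the analogous azure-openai-emit-token-metric policy; attribute and dimension names should be verified against the linked documentation):
<llm-emit-token-metric namespace="genai-metrics">
    <dimension name="Subscription ID" />
    <dimension name="Client IP" value="@(context.Request.IpAddress)" />
</llm-emit-token-metric>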
LLM Semantic Caching Policy
LLM Semantic Caching policy (preview) is designed to reduce latency and token consumption by caching responses based on the semantic content of prompts.
Key Features
Reduced Latency: Cache responses to frequently asked queries to decrease response times.
Improved Efficiency: Optimize resource utilization by reducing redundant model inferences.
Content-Based Caching: Leverages semantic similarity to determine which response to retrieve from cache
Learn more about this policy here.
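For illustration, semantic caching is typically configured as a pair: a lookup in the inbound section and a store in the outbound section. A sketch, modeled on the analogous azure-openai-semantic-cache-lookup/store policies (names and attributes are assumptions to verify against the linked documentation):
<llm-semantic-cache-lookup score-threshold="0.05" embeddings-backend-id="embeddings-backend" />
<llm-semantic-cache-store duration="60" />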
Get Started with Azure AI Model Inference API and Azure API Management
We are committed to continuously improving our platform and providing the tools you need to leverage the full potential of large language models. Stay tuned as we roll out these new policies across all regions and watch for further updates and enhancements as we continue to expand our capabilities. Get started today and bring your intelligent application development to the next level with Azure API Management.
Microsoft Tech Community – Latest Blogs –Read More
Transform Development with .NET Aspire: Integrating JavaScript and Node.js
In the ever-evolving landscape of cloud application development, managing configuration, ensuring resilience, and keeping the integration between the various components seamless can be quite challenging.
And this is exactly where .NET Aspire comes in! A robust application development stack designed to simplify these complexities, letting developers focus on building features instead of wrestling with extensive configuration.
In this article we will explore the core aspects of .NET Aspire, examining its benefits, the setup process, and the integration with JavaScript, as presented in the phenomenal session at the latest .NET Aspire Developers Day by Chris Noring, Senior Developer Advocate at Microsoft.
.NET Aspire Developers Day
The .NET Aspire Developers Day, held on July 23, 2024, was packed with hands-on technical sessions covering different programming languages and frameworks. That was the main goal of the online event: to show how adaptable, flexible, and easy it is to build modern applications with the power of .NET Aspire!
If you missed the event, don't worry! Here is the link to the event recording so you can watch it and learn more about .NET Aspire and its capabilities across different software development scenarios.
.NET Aspire Developer Days – Online Event
But what exactly is .NET Aspire? Let's find out right now!
Understanding .NET Aspire
.NET Aspire is a cloud-ready stack that helps you build distributed, production-ready applications. It ships with NuGet packages that make it easy to develop apps that, instead of being monolithic, are composed of small interconnected services: the famous microservices.
The Goal of .NET Aspire
The goal of .NET Aspire is to improve the development experience, especially when you are building cloud apps. It offers tools and patterns that make everything easier, from configuring to running distributed applications, connecting projects and their dependencies automatically so you don't have to worry about the technical details.
Simplified Orchestration
Orchestration in .NET Aspire focuses on simplifying the local development environment by automating the configuration and interconnection of multiple projects and their dependencies. Although it does not replace the robust systems used in production, such as Kubernetes, .NET Aspire provides abstractions that make setting up service discovery, environment variables, and container configuration more accessible and consistent.
Ready-to-Use Components
.NET Aspire also ships with ready-to-use components, such as Redis and PostgreSQL, that you can add to your project with a few lines of code. It also includes project templates and tooling for Visual Studio, Visual Studio Code, and the .NET CLI, making it even easier to create and manage your projects.
Usage Example
For example, with a few lines of code you can add a Redis container and have the connection string configured automatically in the frontend project:
var builder = DistributedApplication.CreateBuilder(args);
var cache = builder.AddRedis("cache");
builder.AddProject<Projects.MyFrontend>("frontend")
    .WithReference(cache);
If you want to learn more about .NET Aspire, I recommend the official .NET Aspire documentation, which is full of detailed information and practical examples to get you started.
Check out the official documentation now: Official .NET Aspire Documentation
Getting Started with .NET Aspire
During the .NET Aspire Developers Day session, Chris Noring presented an incredible integration between .NET Aspire and JavaScript, showing how to build modern, distributed applications with the power of .NET Aspire and the flexibility of JavaScript.
If you want to watch Chris Noring's full session, use the link below:
He began by explaining how easy the setup is; to start using .NET Aspire you need to install:
.NET 8
.NET Aspire Workload
An OCI-compliant container runtime, such as Docker or Podman
Visual Studio Code or Visual Studio
Extension: C# Dev Kit
Scaffolding a .NET Aspire project is simple and can be done with Visual Studio, Visual Studio Code, or simply the terminal.
For example, you can create a new project from the terminal with the following command:
dotnet new aspire-starter
This command generates a project structure that includes essential components such as the AppHost (the brains of the operation), ServiceDefaults, and a starter application.
After scaffolding the project, the next step is to run it. You do need to make sure HTTPS is enabled, though, because .NET Aspire requires HTTPS to work.
To enable HTTPS, you can use the following command:
dotnet dev-certs https --trust
And finally, to run the project, just use the command:
dotnet run
When you run the AppHost project, a dashboard opens showing all the resources in your project, such as APIs and front-end services. This dashboard provides valuable insights into your application's metrics, logs, and active requests, making it easier to monitor and debug your cloud application.
Chris Noring demonstrated all of this during the .NET Aspire Developers Day session, showing how easy and practical it is to start building modern applications with .NET Aspire.
If you like, I recommend reading the tutorial "Quickstart: Build your first .NET Aspire project", available in the official .NET Aspire documentation.
A Bit More About Orchestration with .NET Aspire
Let's dig a little deeper into what Chris Noring showed in this part of the session.
Orchestrating distributed applications with .NET Aspire involves configuring and connecting the various components that make up the application. The aspire-manifest.json file is a central piece of this process, documenting how services connect and are configured within the application.
This automation makes the developer's life easier, eliminating the need to configure each connection and dependency manually.
The Role of aspire-manifest.json
aspire-manifest.json is a JSON file generated automatically by .NET Aspire that contains all the necessary information about the application's resources and components.
It includes details such as connection strings, environment variables, ports, and communication protocols. The manifest ensures that all of the application's services connect correctly and work in harmony.
Let's look at the example Chris Noring replicated during the session, configuring a Redis cache and a Products API built in Node.js in the Program.cs file:
var cache = builder.AddRedis("cache");
var productApi = builder.AddNpmApp("productapi", "../NodeApi", "watch")
    .WithReference(cache)
    .WithHttpEndpoint(env: "PORT")
    .WithExternalHttpEndpoints()
    .PublishAsDockerFile();
In this example, Redis is configured as a cache service, and the Products API, built in Node.js, is configured to use that cache. The WithReference(cache) method ensures that the Products API can connect to Redis. The PublishAsDockerFile() method creates a Dockerfile for the application, allowing it to run in a container.
How Does the Manifest Reflect These Settings?
Once the code runs, .NET Aspire generates an aspire-manifest.json file that reflects all the configuration made in code. Here Chris explains how the manifest documents the Redis and Products API configuration:
{
  "productapi": {
    "type": "dockerfile.v0",
    "path": "../NodeApi/Dockerfile",
    "context": "../NodeApi",
    "env": {
      "NODE_ENV": "development",
      "ConnectionStrings__cache": "{cache.connectionString}",
      "PORT": "{productapi.bindings.http.port}"
    },
    "bindings": {
      "http": {
        "scheme": "http",
        "protocol": "tcp",
        "transport": "http",
        "targetPort": 8000,
        "external": true
      }
    }
  }
}
In this snippet of the manifest, we can see that the Products API (productapi) is configured to use the Redis connection string (ConnectionStrings__cache), which is automatically generated and injected into the application's environment. The manifest also specifies that the Products API will be exposed over HTTP on port 8000.
How Do You Generate or Update the Manifest?
To generate or update the aspire-manifest.json file, you can use the following command:
dotnet run --publisher manifest --output-path aspire-manifest.json
This command runs the application and generates the manifest, which is very important for deploying to production environments or for testing during development.
Integrating JavaScript with .NET Aspire
.NET Aspire's flexibility extends to JavaScript integration, supporting both front-end and back-end development. This lets developers use popular JavaScript frameworks and libraries alongside .NET components, creating a unified development environment.
Front-End Example with Angular
In his talk, Chris Noring demonstrated how .NET Aspire can be integrated with a front-end project built in Angular. Backend configuration and API connections are simplified through environment variables, which are automatically generated and injected into the project.
Backend Configuration in Angular
The proxy.conf.js file is used to redirect API calls in the development environment to the correct backend. The backend URLs, which can vary between environments, are managed through environment variables. Here is an example configuration:
module.exports = {
  "/api": {
    target: process.env["services__weatherapi__https__0"] || process.env["services__weatherapi__http__0"],
    secure: process.env["NODE_ENV"] !== "development",
    pathRewrite: { "^/api": "" },
  },
};
In this example, the target is set from the services__weatherapi__https__0 or services__weatherapi__http__0 environment variables, which are injected automatically by .NET Aspire. This configuration ensures the Angular frontend can connect to the correct backend service regardless of the environment (development, test, production).
Using HttpClient in Angular
In the Angular code, interaction with the backend can be done through the HttpClient service, as shown in the following example:
constructor(private http: HttpClient) {
  this.http.get<WeatherForecast[]>('api/weatherforecast').subscribe({
    next: result => this.forecasts = result,
    error: console.error
  });
}
In this snippet, the call to api/weatherforecast is automatically redirected to the correct backend thanks to the configuration in proxy.conf.js. This simplifies communication between the Angular frontend and the backend and ensures the environment variables configured in the .NET Aspire manifest are used correctly.
Integrating Node.js with .NET Aspire
.NET Aspire doesn't just orchestrate .NET applications; it also integrates seamlessly with other technologies, such as Node.js. This flexibility lets you build distributed applications that combine different technology stacks efficiently.
Orchestration in the AppHost
In the orchestration performed in the AppHost, .NET Aspire lets you connect the different components of your application, such as a Node.js frontend and a backend API, simply and clearly.
var cache = builder.AddRedis("cache");
var weatherapi = builder.AddProject<Projects.AspireWithNode_AspNetCoreApi>("weatherapi");
var frontend = builder.AddNpmApp("frontend", "../NodeFrontend", "watch")
    .WithReference(weatherapi)
    .WithReference(cache)
    .WithHttpEndpoint(env: "PORT")
    .WithExternalHttpEndpoints()
    .PublishAsDockerFile();
In this example, cache is Redis, weatherapi is the weather forecast API, and frontend is the Node.js application. The WithReference() calls connect these components, ensuring the frontend has access to both Redis and the API.
Using PublishAsDockerFile() lets the frontend be packaged as a Docker container, making it easy to deploy in any environment.
The code above shows how the AppHost is configured.
In the Node.js Application…
Here the Node.js application is configured to retrieve the cache address and the API URL directly from the .NET Aspire project.
This is done through environment variables that are generated automatically from the resources defined in the Aspire manifest.
const cacheAddress = env['ConnectionStrings__cache'];
const apiServer = env['services__weatherapi__https__0'] ?? env['services__weatherapi__http__0'];
Here, ConnectionStrings__cache and services__weatherapi are environment variables that Aspire injects automatically into the Node.js application's runtime environment. They contain the values the application needs to connect correctly to Redis and to the weather forecast API.
With this information in hand, the application can easily reach the cache and the API without hard-coding URLs or connection strings. This not only makes the code easier to maintain but also ensures the application works correctly across different environments (development, test, production).
Usage Example in an Express Route
Here is an example of how this configuration is used in an Express route in the Node.js application:
app.get('/', async (req, res) => {
  let cachedForecasts = await cache.get('forecasts');
  if (cachedForecasts) {
    res.render('index', { forecasts: JSON.parse(cachedForecasts) });
    return;
  }
  let response = await fetch(`${apiServer}/weatherforecast`);
  let forecasts = await response.json();
  await cache.set('forecasts', JSON.stringify(forecasts));
  res.render('index', { forecasts });
});
Here, the application first tries to retrieve the weather forecasts from the Redis cache. If the data is in the cache, it is rendered directly. Otherwise, the application requests the forecasts from the weather API (apiServer), stores the results in the cache, and then displays them.
This logic significantly improves the application's performance and efficiency, ensuring data is served quickly from the cache whenever possible.
Conclusion
.NET Aspire represents a significant step forward in simplifying the development of distributed, cloud-ready applications. With its ability to integrate different technologies, such as JavaScript and Node.js, it offers a robust and flexible platform for building modern, efficient solutions. If you want to take your development skills to the next level, make the most of the power of .NET Aspire.
To deepen your knowledge even further, I strongly recommend watching Chris Noring's talk, where he explores the capabilities and versatility of .NET Aspire in detail. It is an unmissable opportunity to learn directly from one of the experts at the forefront of software development.
Watch Chris Noring's talk now: Chris Noring's Session at .NET Aspire Developers Day
Additional Resources
To continue your .NET Aspire journey, explore these additional resources:
Official .NET Aspire Documentation
Orchestrate Node.js apps in .NET Aspire
Code Sample: .NET Aspire with Angular, React, and Vue
Code Sample: .NET Aspire + Node.js
Free Course: Build distributed apps with .NET Aspire
Video series: Welcome to .NET Aspire
I hope this article was useful and inspiring. If you have any questions or suggestions, don't hesitate to share them in the comments below. I'm here to help and support you on your learning and professional growth journey.
Until next time, and keep learning, creating, and sharing!
Best way to merge non-profit onmicrosoft.com domain into existing Primary domain
We have an existing Entra tenant (ABCoriginal.net) configured and secured with 100 users. Our non-profit (NFP) application was approved with the domain AlaskaBCoriginal.onmicrosoft.com. We want to combine the two so I can buy NFP licenses for ABCoriginal.net and keep all of the users and configurations.
What steps are needed to get this done, and once complete, will the partner relationship with TechSoup transfer into the existing domain?
Logic APP connecting to AOAG Readonly
Hi everyone,
I have an Always On availability group with the secondary read-only server configured for read-only intent.
I noticed that there is nowhere in Logic Apps where the read-only application intent can be configured as an additional parameter.
Am I missing something, or is this just the way Logic Apps work? I have been able to connect from ADF successfully. Please, can someone advise?
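For reference, what I'm trying to express is what a raw SQL connection string conveys with the ApplicationIntent keyword (a sketch; listener and database names are placeholders):
Server=tcp:my-ag-listener,1433;Database=MyDatabase;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;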
=IF formula to fill a referenced cell
Hello All,
I am trying to make a fillable worksheet for internal use within our office. I have a cell (B3) that I want to hold placeholder text that tells the user what to put into it. I’m hoping I can make a formula to fill the cell with a text prompt when left blank. In essence this is the “If/Then” statement:
IF REFERENCED CELL IS BLANK THEN FILL REFERENCED CELL WITH “NAME”
My thinking was to write an =IF combined with =ISBLANK formula in a separate cell (i.e., G30) that references B3 and then fills B3 with "Name" if it is blank. The formula works, but understandably it fills G30 with "Name". Does anyone have any ideas of how I can have the formula fill the cell it references?
Additionally, I'm also anticipating a logic error with this. If I have a formula checking whether a cell is blank and then filling that cell, it won't be blank anymore. Is that a problem?
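For reference, the G30 formula I tried looks something like this (an illustrative reconstruction, since I didn't paste the exact formula):
=IF(ISBLANK(B3),"Name",B3)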
New Outlook not adhering to PersonalAccountsEnabled unless I revert to Classic Outlook
We implemented PersonalAccountsEnabled = $false in our OwaMailboxPolicy, as below:
Set-OwaMailboxPolicy -PersonalAccountsEnabled $false -identity OwaMailboxPolicy-Default
I waited overnight and accessed a computer using my account, and I could still add Gmail and other personal accounts to my corporate New Outlook. I also removed my corporate account, re-added it to New Outlook, and restarted; same behavior of being able to add Gmail and other personal accounts.
I reverted back to Classic Outlook, restarted it, and enabled New Outlook again; this time the policy worked and prevented me from adding Gmail and other personal accounts.
Is this the correct behavior? I would expect the setting to take effect without having to revert to Classic Outlook. It seems as if New Outlook is not reading the policy except on the initial move from Classic to New.
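For reference, the policy value itself can be confirmed with the standard Exchange Online cmdlet:
Get-OwaMailboxPolicy -Identity OwaMailboxPolicy-Default | Format-List PersonalAccountsEnabled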
Session controlled Microsoft apps very slow response
Hello
For the past 2 months we have been receiving complaints about intermittent D365 slowness. D365 was included in my session control policy. I disabled the policy and the complaints have stopped. Is there part of the policy setup that was missed? I really need the benefits of MCAS without impacting the business. Thanks
ZAP/Post-delivery reporting for Teams, Sharepoint & OneDrive
It seems that the email & collaboration report for 'post-delivery activities' only covers ZAP activity for emails. Other E&C reports support a pivot by workload, but that doesn't seem to be the case here.
Are there ZAP/Post-delivery reports available for Teams, SPO & ODB?
simplexseed.com and Outlook junk rules
I've tried to create rules to send all messages from @simplexseed.com to the Deleted Items folder. While the rule seems to work when I run it manually, it doesn't catch items and send them directly to trash before I sort through my junk mail. There are numerous versions of whatever they are "selling", so I can't simply exclude things like "gutter guard" or "metal roofing".
What am I missing?
This is in Outlook desktop – the classic version.
Thanks for the help.
Recover Tech Community Account
Is it possible to recover an account that I previously had, which is likely associated to my previous work?
Outlook proofing tools not working in French
Hello,
I'm trying to set up French language checking for my emails. In Outlook I add the French language and restart Outlook, but when I type an email it doesn't check my French.
Thanks
Step by Step: Integrate Advanced CSV RAG Service with your own data into Copilot Studio
This post explains how to use the Advanced RAG Service to easily verify proper RAG performance on your own data, and how to integrate it as a service endpoint into Copilot Studio.
This time we use CSV as a sample. CSV is structured text data; when we use basic RAG to process a long CSV file as a vector index and perform similarity search on it using natural language, the grounded data is always chunked, which makes it hard for the LLM to understand the whole data picture.
For example, if we have 10,000 rows in a CSV file and we ask "how many rows does the data contain and what's the mean value of the visits column", a general semantic search service usually cannot give exactly the right answer if it just handles the data as unstructured. We need a different, advanced RAG method to handle the CSV data here.
Thanks to the LlamaIndex Pandas Query Engine, there is a good way to interrogate data frame data through natural language. However, verifying its performance against other approaches and integrating it into an existing enterprise environment, such as Copilot Studio or other user-facing services, definitely needs AI service development experience and takes a certain learning curve and time to get from POC to production.
The Advanced RAG Service supports 6 of the latest advanced indexing techniques, including the CSV Query Engine; developers can leverage it to shorten the POC stage and reach production. Here is a detailed step-by-step guideline:
Prerequisites: an Azure OpenAI resource with gpt-4o-mini and text-embedding-3-small deployments.
a. In a Docker-ready environment, run this command to clone the Dockerfile and related config sample (see below):
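Presumably the clone target is the Docker deploy repo linked at the end of this post:
git clone https://github.com/freistli/AdvancedRAG
cd AdvancedRAG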
b. In the AdvancedRAG folder, rename .env.sample to .env
mv .env.sample .env
c. In the .env file, configure necessary environment variables. In this tutorial, let’s configure:
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_Deployment=gpt-4o-mini
AZURE_OPENAI_EMBEDDING_Deployment=text-embedding-3-small
AZURE_OPENAI_ENDPOINT=https://[name].openai.azure.com/
# Azure Document Intelligence
DOC_AI_BASE=https://[name].cognitiveservices.azure.com/
DOC_AI_KEY=
NOTE:
d. Build your own Docker image:
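A typical invocation, assuming you run it from the AdvancedRAG folder (the image tag is an arbitrary placeholder):
docker build -t advancedrag .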
e. Run the container:
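A sketch, assuming the service listens on port 8000 (as the next step implies) and reads the .env file prepared earlier:
docker run -p 8000:8000 --env-file .env advancedrag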
f. Access http://localhost:8000/
a. Click the CSV Query Engine tab, upload a test CSV file, and click Submit.
b. Click the Chat Mode tab; now we can use natural language to test how well the CSV Query Engine understands the CSV content:
The Advanced RAG Service is built with Gradio and FastAPI. It opens the necessary API endpoints by default; we can turn any of them off in the .env settings.
The Chat endpoint can be used for queries and searches across the different index types. Since we are using the "CSV Query Engine", the request is:
content-type: application/json
{
  "data": [
    "how many records does it have",
    "",
    "CSV Query Engine",
    "/tmp/gradio/86262b8036b56db1a2ed40087bbc772f619d0df4/titanic_train.csv",
    "You are a friendly AI Assistant",
    false
  ]
}
The response is:
{
  "data": [
    "The dataset contains a total of 891 records. If you have any more questions about the data, feel free to ask!",
    null
  ],
  "is_generating": true,
  "duration": 3.148253917694092,
  "average_duration": 3.148253917694092,
  "render_config": null,
  "changed_state_ids": []
}
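For example, the same call can be made with curl (the host and route here are placeholders; substitute the endpoint your deployment actually exposes):
curl -X POST "https://<your-host>/<chat-endpoint>" -H "content-type: application/json" -d '{"data": ["how many records does it have", "", "CSV Query Engine", "/tmp/gradio/86262b8036b56db1a2ed40087bbc772f619d0df4/titanic_train.csv", "You are a friendly AI Assistant", false]}'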
Using this method, we can easily integrate this specific RAG capability into our own services, such as Copilot Studio. Before that, let's publish the service first.
There are different ways to release a Docker image as an app service. Here are the general steps when using Azure Container Registry and Azure Container Apps.
a. Create an Azure Container Registry resource [ACRNAME] and upload your tested Docker image to it. The commands are:
az account set -s [your subscription]
az acr login -n [ACRNAME]
docker push [ACRNAME].azurecr.io/dockerimage:tag
b. Create an Azure Container App and deploy the Docker image to it. Don't forget to enable Session Affinity for the Container App.
To automate the Azure Container App deployment, I provided deploy_acr_app.sh in the repo.
set -e
if [ $# -eq 0 ]
then
echo "No SUF_FIX supplied, it should be an integer or a short string"
docker image list
exit 1
fi
SUF_FIX=$1
RESOURCE_GROUP="rg-demo-${SUF_FIX}"
LOCATION="eastus"
ENVIRONMENT="env-demo-containerapps"
API_NAME="advrag-demo-${SUF_FIX}"
FRONTEND_NAME="advrag-ui-${SUF_FIX}"
TARGET_PORT=8000
ACR_NAME="advragdemo${SUF_FIX}"
az group create --name $RESOURCE_GROUP --location "$LOCATION"
az acr create --resource-group $RESOURCE_GROUP --name $ACR_NAME --sku Basic --admin-enabled true
az acr build --registry $ACR_NAME --image $API_NAME .
az containerapp env create --name $ENVIRONMENT --resource-group $RESOURCE_GROUP --location "$LOCATION"
az containerapp create --name $API_NAME --resource-group $RESOURCE_GROUP --environment $ENVIRONMENT --image $ACR_NAME.azurecr.io/$API_NAME --target-port $TARGET_PORT --ingress external --registry-server $ACR_NAME.azurecr.io --query properties.configuration.ingress.fqdn
az containerapp ingress sticky-sessions set -n $API_NAME -g $RESOURCE_GROUP --affinity sticky
To use it:
./deploy_acr_azure.sh [suffix number]
Note: for more details about this script, refer to this guideline.
After around 7~8 minutes, the Azure Container App will be ready. You can check the output and access it directly:
To protect your container app, you can follow this guide to enable authentication on it:
Enable authentication and authorization in Azure Container Apps with Microsoft Entra ID
By default, we need to upload a CSV to the AdvRAG service before analysis. The service always saves the uploaded file to a local temp folder on the server side, and we then use that temp file path to start the analysis query.
To skip this step, we can save common files in the rules subfolder of the AdvancedRAG folder and then build the Docker image; the files will be copied into the image itself. As a demo, I can put a CSV file in AdvancedRAG/rules/files and then publish the Docker image to Azure.
a. Open Copilot Studio, create a new Topic, use “CSV Query” to trigger it.
b. For demo purposes, I uploaded a test CSV file and got its path, then put it into a variable:
c. Now let's add a Question step to ask what question the user wants to ask:
d. Click “+”, “Add an Action”, “Create a flow”. We will use this new flow to call AdvancedRAG service endpoint.
e. We need Query, File_Path, and System_Message as input variables.
f. In the flow editor, add an HTTP step. In that step, post the request to the AdvancedRAG endpoint as below:
Save the flow as ADVRAGSVC_CSV, and publish it.
g. Back in the Copilot Studio topic, add the action as below and set the input variables as needed:
h. Publish and open this custom Copilot in the Teams channel based on this guide.
i. Now we can test the topic like this; as we can see, even though I used gpt-4o-mini here, the response accuracy is very good:
The above shows how to quickly verify a potentially useful RAG technique (the Pandas Query Engine) in the Advanced RAG service studio, then expose and publish it as a REST API endpoint that other services, such as Copilot Studio, can use.
The same overall process applies to Knowledge Graph, GraphRAG, Tree Mode Summary, and the other index types in this AdvancedRAG service. In this way developers can efficiently move from proof of concept to production, leveraging advanced RAG capabilities in their own services.
The AdvancedRAG service focuses on the key logic and stability of the different important index types, and on how efficiently they can land in M365 AI use cases. For any feature improvement ideas, feel free to visit the repos below to create issues, fork the projects, and open PRs.
Docker Deploy Repo: https://github.com/freistli/AdvancedRAG
Source Code Repo: https://github.com/freistli/AdvancedRAG_SVC
Exploring the Advanced RAG (Retrieval Augmented Generation) Service
MVP’s Favorite Content: Microsoft Teams and DevOps
In this blog series dedicated to Microsoft’s technical articles, we’ll highlight our MVPs’ favorite article along with their personal insights.
Onyinye Madubuko, M365 MVP, Ireland
Clear Teams cache – Microsoft Teams | Microsoft Learn
“This was helpful in troubleshooting the new Teams application for users experiencing issues.”
*Relevant Blog: Teams Window keeps flickering and not launching (techiejournals.com)
Laurent Carlier, M365 MVP, France
Overview of meetings, webinars, and town halls – Microsoft Teams | Microsoft Learn
“Teams meetings have evolved significantly over the past few years, with the end of live Team events, the introduction of Town Halls, and the strengthening of Teams Premium features. It’s not always easy to understand what is and isn’t included in Teams Premium licences, or to explain the benefits of purchasing this new plan. This documentation and its comparison tables make my job a lot easier today.”
Edward Kuo, Microsoft Azure MVP, Taiwan
Introduction to Azure DevOps – Training | Microsoft Learn
“I am a DevOps expert and an Azure specialist, primarily responsible for guiding enterprises in using Azure DevOps and establishing DevOps teams.”
*Relevant Blog: DevOps – EK.Technology Learn (edwardkuo.dev)
Kazushi Kamegawa, Developer Technologies MVP, Japan
Managed DevOps Pools – The Origin Story – Engineering@Microsoft
“Using Azure Pipelines for CI/CD in a closed network environment requires the use of self-hosted agents, and managing these images was a very labor-intensive task. Even with automation, updates took 5-6 hours and had to be done once or twice a month. It was probably a challenge for everyone.
In this context, the announcement of the Managed DevOps Pools on this blog was very welcome news. It’s not just me; it’s likely the solution everyone was hoping for, and I am very much looking forward to it.”
*Relevant event: Azure DevOpsオンライン Vol.11 ~ Managed DevOps Pool解説 – connpass