Category Archives: Microsoft
How to Enlarge Font Size in the File Save Dialogue Box
How can I adjust the text size in the Save As dialog box, specifically for the File Name input?
I attempted to use a tool called FontSizeSelector, but did not locate an option to modify the font size for the File Name field.
Desktop Icons Missing: “File Not Found – iconcache_*.db” and OneDrive Error 0x8004de44
There seems to be an issue with loading desktop icons and accessing OneDrive on a specific user account. Upon attempting to log in to OneDrive, error code 0x8004de44 is encountered. When using the command prompt:
C:\Windows\System32>cd /d %userprofile%\AppData\Local\Microsoft\Windows\Explorer
C:\Users\iut04\AppData\Local\Microsoft\Windows\Explorer>attrib -h iconcache_*.db
The command returns “File not found – iconcache_*.db,” indicating that the icon cache file is missing.
Generating Random Characters
Greetings,
I hope this message finds you well. I am seeking assistance with a perplexing issue I have encountered on my modern HP Pavilion desktop computer.
In recent weeks, I have observed an unusual occurrence where an unidentified character, consistently in the shape of a rectangular bracket, appears unbidden and begins populating text boxes across various programs. This phenomenon transpires intermittently, seemingly at random intervals, and persists despite my attempts to mitigate it through the use of different keyboards, both wired and wireless.
My mouse, which operates wirelessly using what I believe to be Bluetooth technology, functions without issue during these episodes. The situation only arises sporadically, and when it does, it disrupts the normal functioning of my desktop.
I am uncertain whether this anomaly is attributable to electrical disturbances, interface irregularities, or perhaps an underlying hardware fault within the computer system. Any insights or recommendations you could provide would be greatly valued.
Thank you for your time and consideration.
Actionable Messages Development
Does anybody know how I can get support from Microsoft regarding Adaptive Card development? I have sent several emails to the address that is listed in the Learn documentation and haven't received any response back. I am trying to obtain an Originator ID, but the development portal opens to a 500 error.
changed keys
I have swapped the keys, Alt = Windows and Windows = Alt. How do I fix this? Help, please.
Exchange connector Autodiscover not set, but SCSM always tries to use it
Dears,
when I try to set up Exchange connector 4.1 in SCSM 2019, with the correct tenant and client, an error message appears:
Exchange Connector: Unable to validate credentials, please refer to the event logs for more information (Error Type=AggregateException, Message=One or more errors occurred.)
Additional Details:
System.AggregateException: One or more errors occurred. ---> System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The remote name could not be resolved: 'autodiscover.domain'
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
--- End of inner exception stack trace ---
I didn't check the autodiscover option, so why does this issue appear? For security reasons I changed the domain name, so the issue is not there.
Project for the Web – how to prevent Assign Project Task to Contact or Account from Power App?
Hi, our customer has noticed that if he wants to assign a task to a colleague and he starts typing e.g. the first 3 letters of the name, the whisperer offers not only user names from Entra ID but also Contacts and Account names from the connected Power Application Project. How to prevent this? It is not desirable, project tasks are only assigned within the organization.
Thank you for help, Jan
Main Row and Split
Hi,
I have a problem. I cannot see the row from A-... or the split from 1-... Does someone know where I can show this row and split?
Timesheet calculations – help please
Going round in circles and finally admitting defeat. Photo of the data set I am currently working with – this relates to hours worked for staff.
What I need to do in the unsocial hours column (Col O) is account for any hours worked between midnight (23:59) and 6 am (06:00). E.g. for the first row, O2 should read 02:20 for 2 hrs 20 mins.
The first thing I'm stuck on is that I have used =TEXT(H2, "HH:MM:SS") in columns M and N to convert the date/time in H and I into time only. However, this isn't overwriting the original date in the underlying data, so every combination of IF statements that I try gives false results.
I’m sure there is a very simple and straightforward answer, my brain is just fried. Thanks very much.
Kerberos-Key-Distribution-Center error id 42: krbtgt reset procedure
Hi Guys
In a new company I'm experiencing this error on the PDC, so I decided to go through the procedure of resetting the Kerberos account password.
I know that I have to reset it and wait for replication to all domain controllers, and I know that Microsoft suggests waiting at least 10 hours before resetting the krbtgt password again.
But the network is composed of 7 DCs located out in branch offices, and many services depend on AD as you can imagine; in an environment like this I can't be sure to have a rapid back-out plan.
Does someone have experience with this procedure? What are the consequences for connected users and services?
Thanks in advance
AP
Recovery options for Azure Virtual Desktop session host VMs
Last week an update issue caused unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, including some Azure Virtual Desktop session host virtual machines (VMs). CrowdStrike has released a public statement addressing the matter that includes recommended steps for a workaround. Microsoft also released guidance for resolving the issue for Azure VMs, which detailed restoring from a backup created prior to the update and OS disk repairs.
For Azure Virtual Desktop session host VMs that have been impacted, there are several recovery options. First, we recommend reviewing the recovery options for Azure virtual machines should they be applicable to your environment.
For Azure Virtual Desktop session host VMs specifically, if you are using FSLogix to maintain the user profile separate from the VMs:
Deploy a new host pool, or add new session hosts to an existing host pool
Use the existing FSLogix configuration that would enable user profiles and states to be consumed from these new VMs, which are themselves unaffected by the specific CrowdStrike version that has caused the issue.
You can then, optionally, delete the impacted session host VMs at a time of your choosing.
FSLogix redirects the user profile to a virtual hard disk (VHD) that is stored separately from the VM on a storage service located within Azure. When a user signs in to their session host, their user profile VHD is mounted onto the VM and the user profile is loaded into the session. The user experience is therefore maintained on the new session host, enabling the user to be productive. No user profile data is stored in the VM local disk.
If you used an existing image to create your session hosts, this image should be used so that any applications or configurations that were pre-configured within the image are immediately available to users. You can alternatively use the Azure Marketplace to select any supported Windows image. You would then apply any existing policies via Active Directory Group Policy or Microsoft Intune, as well as install any software packages via your software distribution tool.
For personal host pools using FSLogix, while FSLogix will return the user profile and the user experience to a new session host, any data stored manually on the local drive(s) or bespoke software installations will be lost. Data can, however, be restored by mounting the impacted VM OS disk to another virtual machine and manually copying the data.
Further information on FSLogix is available in our FSLogix documentation.
Azure AI Search: Nativity in Microsoft Fabric
How to create an AI Web App with Azure OpenAI, Azure AI Search with Vector Embeddings and Microsoft Fabric Pipelines
Intro
Today, we embark on an exciting journey to build an AI Assistant and recommendations bot with cutting-edge features, helping users decide which book is best suited to their preferences. Our bot will handle various interactions, such as providing customized recommendations and engaging in chat conversations. Additionally, users can register and log in to this Azure cloud-native AI application. Microsoft Fabric will handle automation and AI-related tasks such as:
Load and clean the books dataset with triggered Pipelines and Notebooks
Transform the dataset to JSON, making the proper adjustments for vector usability
Load the cleaned and transformed dataset to Azure AI Search and configure vector and semantic profiles
Create and save embeddings with Azure OpenAI to Azure AI Search
As you may have already guessed, our foundation lies in Microsoft Fabric, leveraging its powerful Python Notebooks, Pipelines, and Datalake toolsets. We'll integrate these tools with a custom identity database and an AI Assistant. Our mission? To explore the core AI functionalities that set modern applications apart: embeddings, semantic kernel, and vectors. As we navigate Microsoft Azure's vast offerings, we'll build our solution from scratch.
Prerequisites for Workshop
Apart from this guide, everything will be shared through GitHub; nevertheless we need:
Azure Subscription, access to Azure OpenAI with text-embeddings and chat-gpt deployments, Microsoft Fabric with a Pro license (a trial is fine), patience, and excitement!
Infrastructure
I do respect everyone's time, so I am going to point you to the GitHub repo that holds the whole implementation, along with the Terraform automation. We will start with the SQL query that runs within Terraform. The query needs the following code:
CREATE TABLE Users (
UserId INT IDENTITY(1,1) PRIMARY KEY,
FirstName NVARCHAR(50) NOT NULL,
LastName NVARCHAR(50) NOT NULL,
Username NVARCHAR(50) UNIQUE NOT NULL,
PasswordHash NVARCHAR(255) NOT NULL,
Age INT NOT NULL,
EmailAddress NVARCHAR(255) NOT NULL, -- referenced by the register endpoint later on
PhotoUrl NVARCHAR(500) NOT NULL
);
-- Genres table
CREATE TABLE Genres (
GenreId INT PRIMARY KEY IDENTITY(1,1),
GenreName NVARCHAR(50)
);
-- UsersGenres join table
CREATE TABLE UsersGenres (
UserId INT,
GenreId INT,
FOREIGN KEY (UserId) REFERENCES Users(UserId),
FOREIGN KEY (GenreId) REFERENCES Genres(GenreId)
);
ALTER DATABASE usersdb01
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
We have enabled Change Tracking so we can trigger the embeddings creation upon each change to the database.
You can see we are using a join table to handle users and genres, since the various genres selected by the users will help the assistant make recommendations. Keep in mind you need sqlcmd installed on your workstation!
Vector, Embeddings & Fabric Pipelines
Yes, you read that right! We are going to get a books dataset from Kaggle, clean it, transform it and upload it to AI Search, where we will create an index for the books. We will also create and store the embeddings using a vector profile in AI Search. In a similar manner we will get the users from SQL and upload them to an AI Search users index, create the embeddings and save them as well. The really exciting part is that we will use Microsoft Fabric Pipelines and Notebooks for the books index and embeddings! So it is important to have a Fabric Pro Trial license with the minimum capacity enabled.
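Before we build anything, it helps to recall what vector search actually ranks on. As a toy illustration (plain Python, not the AI Search SDK), the similarity between a query embedding and a document embedding is typically measured with cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real text-embedding vectors have
# hundreds or thousands of dimensions.
book = [0.2, 0.8, 0.1]
query = [0.25, 0.75, 0.05]
print(cosine_similarity(book, query))  # close to 1.0: very similar
```

AI Search performs this kind of comparison at scale against every vector stored in the index, which is why we need to create and upload embeddings for both books and users.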
Books Dataset
The ultimate purpose here is to automate the creation of embeddings for both the books and users datasets, so in the Web App we can get recommendations based on preferences but also on actual queries we send to the AI Assistant. We will get a main books dataset as delimited text (CSV) and transform it to JSON with the correct format so it can be uploaded to an Azure AI Search index, utilizing the native AI Search vector profiles and Azure OpenAI for the embeddings. The Fabric Pipelines will be triggered on a schedule, and we will explore other possible ways.
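As a rough sketch of that CSV-to-JSON step (plain Python; the field names mirror the [id, Author, Title, Genres, Rating] dataset described later, and your actual index schema may differ):

```python
import csv
import io
import json

def csv_to_search_docs(csv_text):
    """Turn the books CSV into a JSON array of documents shaped for an
    AI Search upload. Field names follow this article's dataset."""
    docs = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        docs.append({
            "id": str(row["id"]),  # AI Search document keys must be strings
            "Author": row["Author"].strip(),
            "Title": row["Title"].strip(),
            "Genres": [g.strip() for g in row["Genres"].split(",") if g.strip()],
            "Rating": float(row["Rating"]),
        })
    return json.dumps(docs, ensure_ascii=False)

sample = 'id,Author,Title,Genres,Rating\n1,Frank Herbert,Dune,"Sci-Fi, Classic",4.2\n'
print(csv_to_search_docs(sample))
```

The real notebooks do this over the whole Kaggle dataset in the Lakehouse, but the shape of the output (a JSON array of records keyed by a string id) is the part AI Search cares about.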
In Microsoft Fabric, Notebooks are an important tool, as in most modern data platforms. The managed Spark clusters allow us to create and execute powerful scripts in the form of Python Notebooks (PySpark), add them to a Pipeline, and build solid projects and solutions. Microsoft Fabric provides the ability to pre-install libraries and configure our Spark compute within Environments, so our code will have all requirements in this managed environment. In our case we will install all required libraries and also pin the OpenAI version to pre-1.0.0 for this project. But let's take it from the start. We need to access app.fabric.microsoft.com and create a new Workspace with a Trial Pro license. It should look like this and also have the diamond icon:
Once we have our Workspace in place we can select it and from the left menu select New and create the Environment and later a Lakehouse.
The Environment settings that worked for me are the following; you can see that we just install public libraries:
Fabric Environment: OpenAI Pinning
Since all the code is available on GitHub, I prefer to move on to the next task: creating the Pipeline, which will contain the Notebooks. Select your Workspace icon on the left vertical menu, find the New+ drop-down menu and More Options until you find the Data Pipeline. You will be presented with a dashboard quite similar to the familiar Synapse/Data Factory one, where we can start inserting our activities. You have to create all Notebooks beforehand just to keep everything in order, so based on the GitHub repo we will have 5 Notebooks ready. The Fabric API does not yet support firing pipelines (it will eventually), so we can either schedule or work with Event Stream. Reflex supports same-directory Azure connections only (we will have a look another time), but our subscription is on another tenant, so yeah: schedule it is!
The Pipeline has the following activities:
Let's shed some light! We assume that the dataset is stored in a Blob Storage account, so we get that CSV into the Lakehouse. The first Notebook cleans the data with Python: remove nulls, remove non-English characters, and so on. Since the activity stores it as part of a folder-like structure with non-direct access, we need a task to save it to our Lakehouse. We then transform it to JSON, make the JSON a correct array of records, and again save it to the Lakehouse. The last 2 Notebooks create the AI Search index, upload the JSON to AI Search, configure the AI Search vector and semantic profiles, and fetch all records to create embeddings from Azure OpenAI and store those back in AI Search. Due to the great number of documents we apply rate-limit back-off, and you can expect this to take almost 30 minutes to complete for around 9,500 records.
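The rate-limit back-off mentioned above can be sketched as a small retry helper. In this sketch, flaky_embed is a stand-in for the real Azure OpenAI embeddings call and RuntimeError stands in for a 429 rate-limit error:

```python
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn, retrying with exponential back-off when it raises.
    The delay doubles on every retry: base, 2*base, 4*base, ..."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # give up after the last retry
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical embedding call that fails twice before succeeding.
calls = {"n": 0}
def flaky_embed():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429: rate limited")
    return [0.1, 0.2, 0.3]

print(with_backoff(flaky_embed, base_delay=0.01))  # succeeds on the third call
```

The notebooks in the repo wrap each embeddings request this way, which is why processing the ~9,500 book records takes about half an hour under the default rate limits.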
Users Dataset
Most of the workflow is similar for the users index and embeddings. The difference is that our users are stored, and updated with new ones, in an Azure SQL Database. Since we utilize pipelines, Microsoft Fabric natively connects to Azure SQL; in fact our activity is a Copy task, but with a query to bring in the SQL data:
SELECT u.UserId, u.Age, STRING_AGG(g.GenreName, ',') AS Genres
FROM Users u
JOIN UsersGenres ug ON u.UserId = ug.UserId
JOIN Genres g ON ug.GenreId = g.GenreId
GROUP BY u.UserId, u.Age
This SQL query is selecting data from three related tables: Users, UsersGenres, and Genres. Specifically, it’s returning a list of users (based on their UserId and Age) along with a comma-separated list of all the genres associated with each user. The STRING_AGG function is used to concatenate the GenreName into a single string, separated by commas. The JOIN operations are used to link the tables together based on common fields – in this case, the UserId in the Users and UsersGenres tables, and the GenreId in the UsersGenres and Genres tables. The GROUP BY clause is grouping the results by both UserId and Age, meaning that each row in the output will represent a unique combination of these two fields.
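For intuition, here is the same grouping logic sketched in plain Python over the joined rows (with hypothetical sample data):

```python
from collections import defaultdict

def aggregate_genres(rows):
    """Mimic the STRING_AGG query above: rows of (UserId, Age, GenreName)
    collapse into one record per (UserId, Age) with a comma-joined
    genre string."""
    grouped = defaultdict(list)
    for user_id, age, genre in rows:
        grouped[(user_id, age)].append(genre)
    return [
        {"UserId": uid, "Age": age, "Genres": ",".join(genres)}
        for (uid, age), genres in grouped.items()
    ]

# Hypothetical joined rows, as the JOINs would produce them.
rows = [(1, 34, "Sci-Fi"), (1, 34, "Fantasy"), (2, 27, "Mystery")]
print(aggregate_genres(rows))
```

Each output record is exactly the shape the users pipeline uploads to the users-index before embeddings are generated for the genre string.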
So it is a simpler process after all, and due to the small number of users (I can only sign up 5-6 imaginary accounts!), it is quicker.
So what have we done so far? Well, let's break it down, shall we?
Process
Created the main Infrastructure using Terraform – available on GitHub
The Infra provides a Web UI where we register as users and select favorite book Genres, and can log in to a Dashboard where we have access to an AI Assistant. The database used to store users' info is Azure SQL. The Infrastructure also consists of Azure Key Vault, Azure Container Registry, Azure AI Search and Azure Web Apps. A separate Azure OpenAI is already in place.
The backend creates a join table to store UserId with Genres so later it will be easier to create personalized recommendations
We got a Books dataset with [id, Author, Title, Genres, Rating] fields and uploaded it to Azure Blob Storage
We activated a trial (or already available) license for Microsoft Fabric capacity
We created Jupyter Notebooks to clean the source books dataset, transform it and store it as JSON
We created a Fabric Pipeline integrating these Notebooks and new ones that create a books-index in Azure AI Search, configure it with Vector and Semantic profiles, and upload all JSON records into it
The Pipeline continues with additional Notebooks that create embeddings with Azure OpenAI and store these embeddings back in Azure AI Search
A new Pipeline has been deployed that gets the Users data with a query combining the Genres information with Users from the Azure SQL Database and stores it as JSON
The users Pipeline creates and configures a new users-index in Azure AI Search, configures Vector and Semantic profiles, and creates embeddings for all data with Azure OpenAI, storing the embeddings back in the index
Now we are left with the Backend details and maybe some minor changes for the Frontend. As you will see, the GitHub repo contains all required files to create a Docker image, push it to Container Registry and create a Web App in Azure Web Apps. Use: [ docker build -t backend . ], then tag and push: [ docker tag backend {acrname}.azurecr.io/backend:v1 ], [ docker push {acrname}.azurecr.io/backend:v1 ]. We will then be able to see our new repo in Azure Container Registry and deploy our new Web App:
Don’t forget to add * in CORS settings for the backend Web App!
The overall Architecture is like this:
The only variables needed for the Backend Web App are the Key Vault name and the User Assigned Managed Identity ID. All access to other services (SQL, Storage Account, AI Search, Azure OpenAI) goes through Key Vault secrets.
Let's have a quick look at our Backend:
import dotenv from 'dotenv';
import express from 'express';
import sql from 'mssql';
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import multer from 'multer';
import azureStorage from 'azure-storage';
import getStream from 'into-stream';
import cors from 'cors';
import { SecretClient } from "@azure/keyvault-secrets";
import { DefaultAzureCredential } from "@azure/identity";
import { OpenAIClient, AzureKeyCredential } from '@azure/openai';
import { SearchClient } from '@azure/search-documents';
import bodyParser from 'body-parser';
dotenv.config();
const app = express();
app.use(cors({ origin: '*' }));
app.use((req, res, next) => {
res.setHeader('X-Content-Type-Options', 'nosniff');
next();
});
app.use(express.json());
// set up rate limiter: maximum of 100 requests per 15 minutes
import rateLimit from 'express-rate-limit'; // ESM import (require() is unavailable in ES modules)
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // max 100 requests per windowMs
});
// apply rate limiter to all requests
app.use(limiter);
app.get('/:path', function(req, res) {
let path = req.params.path;
if (isValidPath(path))
res.sendFile(path);
});
const vaultName = process.env.AZURE_KEY_VAULT_NAME;
const vaultUrl = `https://${vaultName}.vault.azure.net`;
const credential = new DefaultAzureCredential({
managedIdentityClientId: process.env.MANAGED_IDENTITY_CLIENT_ID, // Use environment variable for managed identity client ID
});
const secretClient = new SecretClient(vaultUrl, credential);
async function getSecret(secretName) {
const secret = await secretClient.getSecret(secretName);
return secret.value;
}
const inMemoryStorage = multer.memoryStorage();
const uploadStrategy = multer({ storage: inMemoryStorage }).single('photo');
let sqlConfig;
let storageAccountName;
let azureStorageConnectionString;
let jwtSecret;
let searchEndpoint;
let searchApiKey;
let openaiEndpoint;
let openaiApiKey;
async function initializeApp() {
sqlConfig = {
user: await getSecret("sql-admin-username"),
password: await getSecret("sql-admin-password"),
database: await getSecret("sql-database-name"),
server: await getSecret("sql-server-name"),
options: {
encrypt: true,
trustServerCertificate: false
}
};
storageAccountName = await getSecret("storage-account-name");
azureStorageConnectionString = await getSecret("storage-account-connection-string");
jwtSecret = await getSecret("jwt-secret");
searchEndpoint = await getSecret("search-endpoint");
searchApiKey = await getSecret("search-apikey");
openaiEndpoint = await getSecret("openai-endpoint");
openaiApiKey = await getSecret("openai-apikey");
// Initialize OpenAI and Azure Search clients
const openaiClient = new OpenAIClient(openaiEndpoint, new AzureKeyCredential(openaiApiKey));
const userSearchClient = new SearchClient(searchEndpoint, 'users-index', new AzureKeyCredential(searchApiKey));
const bookSearchClient = new SearchClient(searchEndpoint, 'books-index', new AzureKeyCredential(searchApiKey));
// Start server
const PORT = process.env.PORT || 3001;
app.listen(PORT, () => {
console.log(`Server is running on port ${PORT}`);
}).on('error', error => {
console.error("Error initializing application:", error);
});
}
initializeApp().catch(error => {
console.error("Error initializing application:", error);
});
// Upload photo endpoint
app.post('/uploadphoto', uploadStrategy, (req, res) => {
if (!req.file) {
return res.status(400).send('No file uploaded.');
}
const blobName = `userphotos/${Date.now()}_${req.file.originalname}`;
const stream = getStream(req.file.buffer);
const streamLength = req.file.buffer.length;
const blobService = azureStorage.createBlobService(azureStorageConnectionString);
blobService.createBlockBlobFromStream('pics', blobName, stream, streamLength, err => {
if (err) {
console.error(err);
res.status(500).send('Error uploading the file');
} else {
const photoUrl = `https://${storageAccountName}.blob.core.windows.net/pics/${blobName}`;
res.status(200).send({ photoUrl });
}
});
});
// Register endpoint
app.post('/register', uploadStrategy, async (req, res) => {
const { firstName, lastName, username, password, age, emailAddress, genres } = req.body;
if (!password) {
return res.status(400).send({ message: 'Password is required' });
}
let photoUrl = '';
if (req.file) {
const blobName = `userphotos/${Date.now()}_${req.file.originalname}`;
const stream = getStream(req.file.buffer);
const streamLength = req.file.buffer.length;
const blobService = azureStorage.createBlobService(azureStorageConnectionString);
await new Promise((resolve, reject) => {
blobService.createBlockBlobFromStream('pics', blobName, stream, streamLength, err => {
if (err) {
console.error(err);
reject(err);
} else {
photoUrl = `https://${storageAccountName}.blob.core.windows.net/pics/${blobName}`;
resolve();
}
});
});
}
const hashedPassword = await bcrypt.hash(password, 10);
try {
let pool = await sql.connect(sqlConfig);
let result = await pool.request()
.input('username', sql.NVarChar, username)
.input('password', sql.NVarChar, hashedPassword)
.input('firstname', sql.NVarChar, firstName)
.input('lastname', sql.NVarChar, lastName)
.input('age', sql.Int, age)
.input('emailAddress', sql.NVarChar, emailAddress)
.input('photoUrl', sql.NVarChar, photoUrl)
.query(`
INSERT INTO Users
(Username, PasswordHash, FirstName, LastName, Age, EmailAddress, PhotoUrl)
VALUES
(@username, @password, @firstname, @lastname, @age, @emailAddress, @photoUrl);
SELECT SCOPE_IDENTITY() AS UserId;
`);
const userId = result.recordset[0].UserId;
if (genres && genres.length > 0) {
const genreNames = genres.split(','); // Assuming genres are sent as a comma-separated string
for (const genreName of genreNames) {
let genreResult = await pool.request()
.input('genreName', sql.NVarChar, genreName.trim())
.query(`
IF NOT EXISTS (SELECT 1 FROM Genres WHERE GenreName = @genreName)
BEGIN
INSERT INTO Genres (GenreName) VALUES (@genreName);
END
SELECT GenreId FROM Genres WHERE GenreName = @genreName;
`);
const genreId = genreResult.recordset[0].GenreId;
await pool.request()
.input('userId', sql.Int, userId)
.input('genreId', sql.Int, genreId)
.query('INSERT INTO UsersGenres (UserId, GenreId) VALUES (@userId, @genreId)');
}
}
res.status(201).send({ message: 'User registered successfully' });
} catch (error) {
console.error(error);
res.status(500).send({ message: 'Error registering user' });
}
});
// Login endpoint
app.post('/login', async (req, res) => {
try {
let pool = await sql.connect(sqlConfig);
let result = await pool.request()
.input('username', sql.NVarChar, req.body.username)
.query('SELECT UserId, PasswordHash FROM Users WHERE Username = @username');
if (result.recordset.length === 0) {
return res.status(401).send({ message: 'Invalid username or password' });
}
const user = result.recordset[0];
const validPassword = await bcrypt.compare(req.body.password, user.PasswordHash);
if (!validPassword) {
return res.status(401).send({ message: 'Invalid username or password' });
}
const token = jwt.sign({ UserId: user.UserId }, jwtSecret, { expiresIn: '1h' });
res.send({ token: token, UserId: user.UserId });
} catch (error) {
console.error(error);
res.status(500).send({ message: 'Error logging in' });
}
});
// Get user data endpoint
app.get('/user/:UserId', async (req, res) => {
try {
let pool = await sql.connect(sqlConfig);
let result = await pool.request()
.input('UserId', sql.Int, req.params.UserId)
.query('SELECT Username, FirstName, LastName, Age, EmailAddress, PhotoUrl FROM Users WHERE UserId = @UserId');
if (result.recordset.length === 0) {
return res.status(404).send({ message: 'User not found' });
}
const user = result.recordset[0];
res.send(user);
} catch (error) {
console.error(error);
res.status(500).send({ message: 'Error fetching user data' });
}
});
// AI Assistant endpoint for book questions and recommendations
app.post(‘/ai-assistant’, async (req, res) => {
const { query, userId } = req.body;
console.log(‘Received request body:’, req.body);
console.log(‘Extracted userId:’, userId);
try {
if (!userId) {
console.error(‘User ID is missing from the request.’);
return res.status(400).send({ message: ‘User ID is required.’ });
}
//console.log(`Received request for user ID: ${userId}`);
// Retrieve user data
let pool = await sql.connect(sqlConfig);
let userResult = await pool.request()
.input(‘UserId’, sql.Int, userId)
.query(‘SELECT * FROM Users WHERE UserId = @UserId’);
const user = userResult.recordset[0];
if (!user) {
console.error(`User with ID ${userId} not found.`);
return res.status(404).send({ message: `User with ID ${userId} not found.` });
}
console.log(`User data: ${JSON.stringify(user)}`);
if (query.toLowerCase().includes(“recommendation”)) {
// Fetch user genres
const userGenresResult = await pool.request()
.input(‘UserId’, sql.Int, userId)
.query(‘SELECT GenreName FROM Genres g JOIN UsersGenres ug ON g.GenreId = ug.GenreId WHERE ug.UserId = @UserId’);
const userGenres = userGenresResult.recordset.map(record => record.GenreName).join(‘ ‘);
//console.log(`User genres: ${userGenres}`);
// Fetch user embedding from search index
const userSearchClient = new SearchClient(searchEndpoint, ‘users-index’, new AzureKeyCredential(searchApiKey));
const userEmbeddingResult = await userSearchClient.getDocument(String(user.UserId));
const userEmbedding = userEmbeddingResult.Embedding;
//console.log(`User embedding result: ${JSON.stringify(userEmbeddingResult)}`);
//console.log(`User embedding: ${userEmbedding}`);
if (!userEmbedding || userEmbedding.length === 0) {
console.error(‘User embedding not found.’);
return res.status(500).send({ message: ‘User embedding not found.’ });
}
// Search for recommendations
const bookSearchClient = new SearchClient(searchEndpoint, ‘books-index’, new AzureKeyCredential(searchApiKey));
const searchResponse = await bookSearchClient.search(“*”, {
vectors: [{
value: userEmbedding,
fields: [“Embedding”],
kNearestNeighborsCount: 5
}],
includeTotalCount: true,
select: [“Title”, “Author”]
});
const recommendations = [];
for await (const result of searchResponse.results) {
recommendations.push({
title: result.document.Title,
author: result.document.Author,
score: result.score
});
}
// Limit recommendations to top 5
const topRecommendations = recommendations.slice(0, 5);
return res.json({ response: “Here are some personalized recommendations for you:”, recommendations: topRecommendations });
} else {
// General book query
const openaiClient = new OpenAIClient(openaiEndpoint, new AzureKeyCredential(openaiApiKey));
const deploymentId = “gpt”; // Replace with your deployment ID
// Extract rating and genre from query
const ratingMatch = query.match(/rating over (d+(.d+)?)/);
const genreMatch = query.match(/genre (w+)/i);
const rating = ratingMatch ? parseFloat(ratingMatch[1]) : null;
const genre = genreMatch ? genreMatch[1] : null;
if (rating && genre) {
// Search for books with the specified genre and rating
const bookSearchClient = new SearchClient(searchEndpoint, ‘books-index’, new AzureKeyCredential(searchApiKey));
const searchResponse = await bookSearchClient.search(“*”, {
filter: `Rating gt ${rating} and Genres/any(g: g eq ‘${genre}’)`,
top: 5,
select: [“Title”, “Author”, “Rating”]
});
const books = [];
for await (const result of searchResponse.results) {
books.push({
title: result.document.Title,
author: result.document.Author,
rating: result.document.Rating
});
}
const bookResponse = books.map(book => `${book.title} by ${book.author} with rating ${book.rating}`).join(‘n’);
return res.json({ response: `Here are 5 books with rating over ${rating} in ${genre} genre:n${bookResponse}` });
} else {
// Handle general queries about books using OpenAI with streaming chat completions
const events = await openaiClient.streamChatCompletions(
deploymentId,
[
{ role: “system”, content: “You are a helpful assistant that answers questions about books and provides personalized recommendations.” },
{ role: “user”, content: query }
],
{ maxTokens: 350 }
);
let aiResponse = “”;
for await (const event of events) {
for (const choice of event.choices) {
aiResponse += choice.delta?.content || ”;
}
}
return res.json({ response: aiResponse });
}
}
} catch (error) {
console.error(‘Error processing AI Assistant request:’, error);
return res.status(500).send({ message: ‘Error processing your request.’ });
}
});
As you can see apart form the registration and login endpoints we have the ai-assistant endpoint. Users are able not only to get personalized recommendations when the word “recommendations” is in the chat, but also information on Genres and ratings, again when these words are in the Chat request. Also they can chat regularly with the Assistant about books and literature!
The UI needs some fine tuning, we can add Chat History and you are welcome to do it![Done] Please find the code in GitHub and in case you need help let me know !
Conclusion
We just build our own Web AI Assistant with an enhanced recommendation engine, utilizing a number of Azure and Microsoft Services. It is important to prepare well ahead of such a project, load yourself with patience and be prepared to make mistakes and learn ! I reached 15 Docker Images for the backend to have a basic functionality ! But hey i did it for everyone so you can just grab it and enjoy it, even make it better! Thank you for staying up to this point!
References
Azure SDK for JavaScriptAzure AI SearchCreate a Vector IndexGenerate EmbeddingsFabric: Introduction to deployment pipelinesDevelop, execute, and manage Microsoft Fabric notebooks
How to create an AI Web App with Azure OpenAI, Azure AI Search with Vector Embeddings and Microsoft Fabric Pipelines
Intro
Today, we embark on an exciting journey to build an AI Assistant and Recommendations bot with cutting-edge features, helping users decide which book best suits their preferences. Our bot will handle various interactions, such as providing customized recommendations and engaging in chat conversations. Additionally, users can register and log in to this Azure cloud-native AI application. Microsoft Fabric will handle automation and AI-related tasks such as:
Load and clean the books dataset with triggered Pipelines and Notebooks
Transform the dataset to JSON, making proper adjustments for vector usability
Load the cleaned and transformed dataset to Azure AI Search and configure vector and semantic profiles
Create and save embeddings with Azure OpenAI to Azure AI Search
As you may have already guessed, our foundation lies in Microsoft Fabric, leveraging its powerful Python Notebooks, Pipelines, and Datalake toolsets. We will integrate these tools with a custom identity database and an AI Assistant. Our mission? To explore the core AI functionalities that set modern applications apart: think embeddings, semantic kernel, and vectors. As we navigate Microsoft Azure's vast offerings, we will build our solution from scratch.
Prerequisites for Workshop
Apart from this guide, everything will be shared through GitHub; nevertheless we need: an Azure subscription, access to Azure OpenAI with text-embeddings and chat-gpt deployments, Microsoft Fabric with a Pro license (trial is fine), patience, and excitement!
Infrastructure
I respect everyone's time, so I will point you to the GitHub repo that holds the whole implementation, along with Terraform automation. We will start with the SQL query that runs within Terraform. The query needs the following code:
CREATE TABLE Users (
    UserId INT IDENTITY(1,1) PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName NVARCHAR(50) NOT NULL,
    Username NVARCHAR(50) UNIQUE NOT NULL,
    PasswordHash NVARCHAR(255) NOT NULL,
    Age INT NOT NULL,
    EmailAddress NVARCHAR(255) NOT NULL, -- referenced by the /register and /user endpoints below
    PhotoUrl NVARCHAR(500) NOT NULL
);
-- Genres table
CREATE TABLE Genres (
    GenreId INT PRIMARY KEY IDENTITY(1,1),
    GenreName NVARCHAR(50)
);
-- UsersGenres join table
CREATE TABLE UsersGenres (
    UserId INT,
    GenreId INT,
    FOREIGN KEY (UserId) REFERENCES Users(UserId),
    FOREIGN KEY (GenreId) REFERENCES Genres(GenreId)
);
ALTER DATABASE usersdb01
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
We have enabled Change Tracking so we can trigger updates for the embeddings whenever a change is made to the database. You can see we are using a join table to handle users and genres, since the genres selected by the users will help the assistant make recommendations. Keep in mind you need sqlcmd installed on your workstation!
Vector, Embeddings & Fabric Pipelines
Yes, you read that well! We are going to get a books dataset from Kaggle, clean it, transform it, and upload it to AI Search, where we will create an index for the books. We will also create and store the embeddings using a vector profile from AI Search. In a similar manner we will get the users from SQL, upload them to an AI Search users index, and create and save their embeddings as well. The really exciting part is that we will use Microsoft Fabric Pipelines and Notebooks for the books index and embeddings, so it is important to have a Fabric Pro trial license with the minimum capacity enabled.
Books Dataset
The ultimate purpose here is to automate the creation of embeddings for both the books and users datasets, so the web app can produce recommendations based on stored preferences as well as on the actual queries we send to the AI Assistant. We will take a main books dataset as delimited text (CSV) and transform it to JSON in the correct format so it can be uploaded to an Azure AI Search index, utilizing the native AI Search vector profiles and Azure OpenAI for the embeddings. The Fabric Pipelines will be triggered on a schedule, and we will explore other possible ways. In Microsoft Fabric, Notebooks are an important tool, as in most modern data platforms. The managed Spark clusters allow us to create and execute powerful scripts in the form of Python Notebooks (PySpark), add them to a Pipeline, and build solid projects and solutions.
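The CSV-to-JSON transformation described above can be sketched in a few lines. The field names (id, Title, Author, Genres, Rating) mirror the dataset fields listed later in this post; note that the real pipeline does this in a PySpark notebook, so this JavaScript version is only an illustration of the logic.

```javascript
// Minimal sketch: turn a delimited-text (CSV) books dataset into the JSON
// array of records that an Azure AI Search index upload expects.
function csvToRecords(csv) {
  const [headerLine, ...rows] = csv.trim().split('\n');
  const headers = headerLine.split(',').map(h => h.trim());
  return rows.map(row => {
    const values = row.split(','); // naive split; real data may need a proper CSV parser
    const record = {};
    headers.forEach((h, i) => { record[h] = (values[i] ?? '').trim(); });
    record.Rating = parseFloat(record.Rating); // keep Rating numeric for OData filters
    return record;
  });
}

const sample = 'id,Title,Author,Genres,Rating\n1,Dune,Frank Herbert,Science Fiction,4.2';
console.log(JSON.stringify(csvToRecords(sample)));
```

Keeping Rating numeric matters later, because the backend filters on it with an OData expression like `Rating gt 4`.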
Microsoft Fabric provides the ability to pre-install libraries and configure our Spark compute within Environments, so our code will have all requirements in this managed environment. In our case we will install all required libraries and also pin the OpenAI version to pre-1.0.0 for this project. But let's take it from the start. We need to access app.fabric.microsoft.com and create a new Workspace with a trial Pro license; it should carry the diamond icon. Once we have our Workspace in place we can select it, choose New from the left menu, and create the Environment and later a Lakehouse. The Environment settings that worked for me simply install public libraries, with the OpenAI pinning mentioned above.
Since all the code is available on GitHub, I prefer to explore the next task: creating the Pipeline, which will contain the Notebooks. Select your Workspace icon on the left vertical menu, find the New+ drop-down menu, then More Options, until you find the Data Pipeline. You will be presented with a dashboard quite similar to the familiar Synapse/Data Factory one, where we can start inserting our activities. You have to create all Notebooks beforehand just to keep everything in order, so based on the GitHub repo we will have 5 Notebooks ready. The Fabric API does not yet support firing pipelines (it will happen eventually), so we can either schedule them or work with Event Stream. Reflex supports same-directory Azure connections only (we will have a look another time), but our subscription is on another tenant, so a schedule it is!
The Pipeline has the following activities. Let's shed some light! We assume that the dataset is stored in a Blob Storage account, so we copy that CSV into the Lakehouse. The first Notebook cleans the data with Python: remove nulls, remove non-English characters, and so on.
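The cleaning rules just described (drop records with missing required fields, strip non-English characters) can be sketched as follows. The actual notebook does this in PySpark; this JavaScript stand-in only illustrates the logic, and the field names are assumptions based on the dataset fields used in this post.

```javascript
// Sketch of the cleaning pass: drop records with null/empty required fields
// and strip non-ASCII (non-English) characters from text fields.
function cleanRecords(records) {
  return records
    .filter(r => r.Title && r.Author) // remove rows with missing required fields
    .map(r => ({
      ...r,
      Title: r.Title.replace(/[^\x20-\x7E]/g, '').trim(),
      Author: r.Author.replace(/[^\x20-\x7E]/g, '').trim(),
    }));
}

const dirty = [
  { Title: 'Dune\u00A9', Author: 'Frank Herbert' },
  { Title: null, Author: 'Unknown' }, // dropped: missing Title
];
console.log(cleanRecords(dirty)); // logs the single cleaned record
```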
Since the copy activity stores the file as part of a folder-like structure without direct access, we need a task to save it to our Lakehouse. We then transform it to JSON, make the JSON a correct array of records, and again save it to the Lakehouse. The last two Notebooks create the AI Search index, upload the JSON to AI Search, configure AI Search with vector and semantic profiles, and fetch all records to create embeddings from Azure OpenAI and store them back in AI Search. Due to the great number of documents we apply rate-limit back-off, and you can expect this to take almost 30 minutes to complete for around 9,500 records.
Users Dataset
Most of the workflow is similar for the users index and embeddings. The difference is that our users, including newly registered ones, are stored in an Azure SQL database. Since we utilize pipelines, Microsoft Fabric natively connects to Azure SQL; our activity is in fact a Copy task, but we use a query to bring in the SQL data:
SELECT u.UserId, u.Age, STRING_AGG(g.GenreName, ',') AS Genres
FROM Users u
JOIN UsersGenres ug ON u.UserId = ug.UserId
JOIN Genres g ON ug.GenreId = g.GenreId
GROUP BY u.UserId, u.Age
This SQL query selects data from three related tables: Users, UsersGenres, and Genres. Specifically, it returns a list of users (based on their UserId and Age) along with a comma-separated list of all the genres associated with each user. The STRING_AGG function concatenates the GenreName values into a single string, separated by commas. The JOIN operations link the tables on common fields: the UserId in the Users and UsersGenres tables, and the GenreId in the UsersGenres and Genres tables. The GROUP BY clause groups the results by both UserId and Age, so each row in the output represents a unique combination of those two fields.
So it is a simpler process after all, and due to the small number of users (I can only subscribe up to 5-6 imaginary accounts!), it is also quicker. So what have we done so far? Well, let's break it down, shall we?
Process
Created the main infrastructure using Terraform (available on GitHub)
The infra provides a web UI where we register as users, select favorite book genres, and log in to a dashboard with access to an AI Assistant; the database storing the users' info is Azure SQL, and the infrastructure also includes Azure Key Vault, Azure Container Registry, Azure AI Search, and Azure Web Apps
A separate Azure OpenAI resource is already in place
The backend creates a join table to store UserId with Genres, so it is easier later to create personalized recommendations
We got a books dataset with [id, Author, Title, Genres, Rating] fields and uploaded it to Azure Blob Storage
We activated a trial (or already available) license for Microsoft Fabric capacity
We created Jupyter Notebooks to clean the source books dataset, transform it, and store it as JSON
We created a Fabric Pipeline integrating these Notebooks and new ones that create a books-index in Azure AI Search, configure it with vector and semantic profiles, and upload all JSON records to it
The Pipeline continues with additional Notebooks that create embeddings with Azure OpenAI and store these embeddings back in Azure AI Search
A new Pipeline gets the users data with a query that combines the Genres information with Users from the Azure SQL database and stores it as JSON
The users Pipeline creates and configures a new users-index in Azure AI Search, configures vector and semantic profiles, creates embeddings for all data with Azure OpenAI, and stores the embeddings back to the index
Now we are left with the backend details and maybe some minor changes for the frontend. As you will see, the GitHub repo contains all required files to create a Docker image, push it to Container Registry, and create a web app in Azure Web Apps. Use [ docker build -t backend . ], then tag and push: [ docker tag backend {acrname}.azurecr.io/backend:v1 ] and [ docker push {acrname}.azurecr.io/backend:v1 ]. We will then see our new repo in Azure Container Registry and can deploy our new web app. Don't forget to add * in the CORS settings for the backend web app! The only variables needed for the backend web app are the Key Vault name and the User Assigned Managed Identity ID.
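What the STRING_AGG query above returns can be illustrated with a small in-memory version of the same grouping. The table and column names match the schema earlier in this post; this JavaScript stand-in is only for intuition and is not part of the backend.

```javascript
// In-memory illustration of the STRING_AGG query: group UsersGenres rows by
// user and emit one record per user with a comma-separated Genres string.
// Note: the SQL version uses inner JOINs, so users with no genres are
// excluded there; this sketch keeps them with an empty string.
function aggregateGenres(users, usersGenres, genres) {
  const genreName = new Map(genres.map(g => [g.GenreId, g.GenreName]));
  return users.map(u => ({
    UserId: u.UserId,
    Age: u.Age,
    Genres: usersGenres
      .filter(ug => ug.UserId === u.UserId)
      .map(ug => genreName.get(ug.GenreId))
      .join(','),
  }));
}

const rows = aggregateGenres(
  [{ UserId: 1, Age: 30 }],
  [{ UserId: 1, GenreId: 1 }, { UserId: 1, GenreId: 2 }],
  [{ GenreId: 1, GenreName: 'Fantasy' }, { GenreId: 2, GenreName: 'Sci-Fi' }]
);
console.log(rows); // logs one row per user, e.g. Genres: 'Fantasy,Sci-Fi'
```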
All access to the other services (SQL, Storage Account, AI Search, Azure OpenAI) goes through Key Vault secrets. Let's have a quick look at our backend:
import dotenv from 'dotenv';
import express from 'express';
import sql from 'mssql';
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import multer from 'multer';
import azureStorage from 'azure-storage';
import getStream from 'into-stream';
import cors from 'cors';
import { SecretClient } from '@azure/keyvault-secrets';
import { DefaultAzureCredential } from '@azure/identity';
import { OpenAIClient, AzureKeyCredential } from '@azure/openai';
import { SearchClient } from '@azure/search-documents';
import bodyParser from 'body-parser';
dotenv.config();
const app = express();
app.use(cors({ origin: '*' }));
app.use((req, res, next) => {
  res.setHeader('X-Content-Type-Options', 'nosniff');
  next();
});
app.use(express.json());
// set up rate limiter: maximum of 100 requests per 15 minutes
import rateLimit from 'express-rate-limit'; // ESM import; require() is unavailable in an ES module
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // max 100 requests per windowMs
});
// apply rate limiter to all requests
app.use(limiter);
import path from 'path';
app.get('/:path', function (req, res) {
  // serve only bare file names to avoid path traversal
  const fileName = path.basename(req.params.path);
  res.sendFile(fileName, { root: process.cwd() }, err => {
    if (err) res.status(404).end();
  });
});
const vaultName = process.env.AZURE_KEY_VAULT_NAME;
const vaultUrl = `https://${vaultName}.vault.azure.net`;
const credential = new DefaultAzureCredential({
  managedIdentityClientId: process.env.MANAGED_IDENTITY_CLIENT_ID, // Use environment variable for managed identity client ID
});
const secretClient = new SecretClient(vaultUrl, credential);
async function getSecret(secretName) {
  const secret = await secretClient.getSecret(secretName);
  return secret.value;
}
const inMemoryStorage = multer.memoryStorage();
const uploadStrategy = multer({ storage: inMemoryStorage }).single('photo');
let sqlConfig;
let storageAccountName;
let azureStorageConnectionString;
let jwtSecret;
let searchEndpoint;
let searchApiKey;
let openaiEndpoint;
let openaiApiKey;
async function initializeApp() {
  sqlConfig = {
    user: await getSecret('sql-admin-username'),
    password: await getSecret('sql-admin-password'),
    database: await getSecret('sql-database-name'),
    server: await getSecret('sql-server-name'),
    options: {
      encrypt: true,
      trustServerCertificate: false
    }
  };
  storageAccountName = await getSecret('storage-account-name');
  azureStorageConnectionString = await getSecret('storage-account-connection-string');
  jwtSecret = await getSecret('jwt-secret');
  searchEndpoint = await getSecret('search-endpoint');
  searchApiKey = await getSecret('search-apikey');
  openaiEndpoint = await getSecret('openai-endpoint');
  openaiApiKey = await getSecret('openai-apikey');
  // Initialize OpenAI and Azure Search clients
  const openaiClient = new OpenAIClient(openaiEndpoint, new AzureKeyCredential(openaiApiKey));
  const userSearchClient = new SearchClient(searchEndpoint, 'users-index', new AzureKeyCredential(searchApiKey));
  const bookSearchClient = new SearchClient(searchEndpoint, 'books-index', new AzureKeyCredential(searchApiKey));
  // Start server
  const PORT = process.env.PORT || 3001;
  app.listen(PORT, () => {
    console.log(`Server is running on port ${PORT}`);
  }).on('error', error => {
    console.error('Error initializing application:', error);
  });
}
initializeApp().catch(error => {
  console.error('Error initializing application:', error);
});
// Upload photo endpoint
app.post('/uploadphoto', uploadStrategy, (req, res) => {
  if (!req.file) {
    return res.status(400).send('No file uploaded.');
  }
  const blobName = `userphotos/${Date.now()}_${req.file.originalname}`;
  const stream = getStream(req.file.buffer);
  const streamLength = req.file.buffer.length;
  const blobService = azureStorage.createBlobService(azureStorageConnectionString);
  blobService.createBlockBlobFromStream('pics', blobName, stream, streamLength, err => {
    if (err) {
      console.error(err);
      res.status(500).send('Error uploading the file');
    } else {
      const photoUrl = `https://${storageAccountName}.blob.core.windows.net/pics/${blobName}`;
      res.status(200).send({ photoUrl });
    }
  });
});
// Register endpoint
app.post('/register', uploadStrategy, async (req, res) => {
  const { firstName, lastName, username, password, age, emailAddress, genres } = req.body;
  if (!password) {
    return res.status(400).send({ message: 'Password is required' });
  }
  let photoUrl = '';
  if (req.file) {
    const blobName = `userphotos/${Date.now()}_${req.file.originalname}`;
    const stream = getStream(req.file.buffer);
    const streamLength = req.file.buffer.length;
    const blobService = azureStorage.createBlobService(azureStorageConnectionString);
    await new Promise((resolve, reject) => {
      blobService.createBlockBlobFromStream('pics', blobName, stream, streamLength, err => {
        if (err) {
          console.error(err);
          reject(err);
        } else {
          photoUrl = `https://${storageAccountName}.blob.core.windows.net/pics/${blobName}`;
          resolve();
        }
      });
    });
  }
  const hashedPassword = await bcrypt.hash(password, 10);
  try {
    let pool = await sql.connect(sqlConfig);
    let result = await pool.request()
      .input('username', sql.NVarChar, username)
      .input('password', sql.NVarChar, hashedPassword)
      .input('firstname', sql.NVarChar, firstName)
      .input('lastname', sql.NVarChar, lastName)
      .input('age', sql.Int, age)
      .input('emailAddress', sql.NVarChar, emailAddress)
      .input('photoUrl', sql.NVarChar, photoUrl)
      .query(`
        INSERT INTO Users
          (Username, PasswordHash, FirstName, LastName, Age, EmailAddress, PhotoUrl)
        VALUES
          (@username, @password, @firstname, @lastname, @age, @emailAddress, @photoUrl);
        SELECT SCOPE_IDENTITY() AS UserId;
      `);
    const userId = result.recordset[0].UserId;
    if (genres && genres.length > 0) {
      const genreNames = genres.split(','); // Assuming genres are sent as a comma-separated string
      for (const genreName of genreNames) {
        let genreResult = await pool.request()
          .input('genreName', sql.NVarChar, genreName.trim())
          .query(`
            IF NOT EXISTS (SELECT 1 FROM Genres WHERE GenreName = @genreName)
            BEGIN
              INSERT INTO Genres (GenreName) VALUES (@genreName);
            END
            SELECT GenreId FROM Genres WHERE GenreName = @genreName;
          `);
        const genreId = genreResult.recordset[0].GenreId;
        await pool.request()
          .input('userId', sql.Int, userId)
          .input('genreId', sql.Int, genreId)
          .query('INSERT INTO UsersGenres (UserId, GenreId) VALUES (@userId, @genreId)');
      }
    }
    res.status(201).send({ message: 'User registered successfully' });
  } catch (error) {
    console.error(error);
    res.status(500).send({ message: 'Error registering user' });
  }
});
// Login endpoint
app.post('/login', async (req, res) => {
  try {
    let pool = await sql.connect(sqlConfig);
    let result = await pool.request()
      .input('username', sql.NVarChar, req.body.username)
      .query('SELECT UserId, PasswordHash FROM Users WHERE Username = @username');
    if (result.recordset.length === 0) {
      return res.status(401).send({ message: 'Invalid username or password' });
    }
    const user = result.recordset[0];
    const validPassword = await bcrypt.compare(req.body.password, user.PasswordHash);
    if (!validPassword) {
      return res.status(401).send({ message: 'Invalid username or password' });
    }
    const token = jwt.sign({ UserId: user.UserId }, jwtSecret, { expiresIn: '1h' });
    res.send({ token: token, UserId: user.UserId });
  } catch (error) {
    console.error(error);
    res.status(500).send({ message: 'Error logging in' });
  }
});
// Get user data endpoint
app.get('/user/:UserId', async (req, res) => {
  try {
    let pool = await sql.connect(sqlConfig);
    let result = await pool.request()
      .input('UserId', sql.Int, req.params.UserId)
      .query('SELECT Username, FirstName, LastName, Age, EmailAddress, PhotoUrl FROM Users WHERE UserId = @UserId');
    if (result.recordset.length === 0) {
      return res.status(404).send({ message: 'User not found' });
    }
    const user = result.recordset[0];
    res.send(user);
  } catch (error) {
    console.error(error);
    res.status(500).send({ message: 'Error fetching user data' });
  }
});
// AI Assistant endpoint for book questions and recommendations
app.post('/ai-assistant', async (req, res) => {
  const { query, userId } = req.body;
  console.log('Received request body:', req.body);
  console.log('Extracted userId:', userId);
  try {
    if (!userId) {
      console.error('User ID is missing from the request.');
      return res.status(400).send({ message: 'User ID is required.' });
    }
    // Retrieve user data
    let pool = await sql.connect(sqlConfig);
    let userResult = await pool.request()
      .input('UserId', sql.Int, userId)
      .query('SELECT * FROM Users WHERE UserId = @UserId');
    const user = userResult.recordset[0];
    if (!user) {
      console.error(`User with ID ${userId} not found.`);
      return res.status(404).send({ message: `User with ID ${userId} not found.` });
    }
    console.log(`User data: ${JSON.stringify(user)}`);
    if (query.toLowerCase().includes('recommendation')) {
      // Fetch user genres
      const userGenresResult = await pool.request()
        .input('UserId', sql.Int, userId)
        .query('SELECT GenreName FROM Genres g JOIN UsersGenres ug ON g.GenreId = ug.GenreId WHERE ug.UserId = @UserId');
      const userGenres = userGenresResult.recordset.map(record => record.GenreName).join(' ');
      // Fetch user embedding from search index
      const userSearchClient = new SearchClient(searchEndpoint, 'users-index', new AzureKeyCredential(searchApiKey));
      const userEmbeddingResult = await userSearchClient.getDocument(String(user.UserId));
      const userEmbedding = userEmbeddingResult.Embedding;
      if (!userEmbedding || userEmbedding.length === 0) {
        console.error('User embedding not found.');
        return res.status(500).send({ message: 'User embedding not found.' });
      }
      // Search for recommendations
      const bookSearchClient = new SearchClient(searchEndpoint, 'books-index', new AzureKeyCredential(searchApiKey));
      const searchResponse = await bookSearchClient.search('*', {
        vectors: [{
          value: userEmbedding,
          fields: ['Embedding'],
          kNearestNeighborsCount: 5
        }],
        includeTotalCount: true,
        select: ['Title', 'Author']
      });
      const recommendations = [];
      for await (const result of searchResponse.results) {
        recommendations.push({
          title: result.document.Title,
          author: result.document.Author,
          score: result.score
        });
      }
      // Limit recommendations to top 5
      const topRecommendations = recommendations.slice(0, 5);
      return res.json({ response: 'Here are some personalized recommendations for you:', recommendations: topRecommendations });
    } else {
      // General book query
      const openaiClient = new OpenAIClient(openaiEndpoint, new AzureKeyCredential(openaiApiKey));
      const deploymentId = 'gpt'; // Replace with your deployment ID
      // Extract rating and genre from query
      const ratingMatch = query.match(/rating over (\d+(\.\d+)?)/);
      const genreMatch = query.match(/genre (\w+)/i);
      const rating = ratingMatch ? parseFloat(ratingMatch[1]) : null;
      const genre = genreMatch ? genreMatch[1] : null;
      if (rating && genre) {
        // Search for books with the specified genre and rating
        const bookSearchClient = new SearchClient(searchEndpoint, 'books-index', new AzureKeyCredential(searchApiKey));
        const searchResponse = await bookSearchClient.search('*', {
          filter: `Rating gt ${rating} and Genres/any(g: g eq '${genre}')`,
          top: 5,
          select: ['Title', 'Author', 'Rating']
        });
        const books = [];
        for await (const result of searchResponse.results) {
          books.push({
            title: result.document.Title,
            author: result.document.Author,
            rating: result.document.Rating
          });
        }
        const bookResponse = books.map(book => `${book.title} by ${book.author} with rating ${book.rating}`).join('\n');
        return res.json({ response: `Here are 5 books with rating over ${rating} in ${genre} genre:\n${bookResponse}` });
      } else {
        // Handle general queries about books using OpenAI with streaming chat completions
        const events = await openaiClient.streamChatCompletions(
          deploymentId,
          [
            { role: 'system', content: 'You are a helpful assistant that answers questions about books and provides personalized recommendations.' },
            { role: 'user', content: query }
          ],
          { maxTokens: 350 }
        );
        let aiResponse = '';
        for await (const event of events) {
          for (const choice of event.choices) {
            aiResponse += choice.delta?.content || '';
          }
        }
        return res.json({ response: aiResponse });
      }
    }
  } catch (error) {
    console.error('Error processing AI Assistant request:', error);
    return res.status(500).send({ message: 'Error processing your request.' });
  }
});
As you can see, apart from the registration and login endpoints we have the ai-assistant endpoint. Users not only get personalized recommendations when the word "recommendation" appears in the chat, but can also ask for information on genres and ratings when those words appear in the request. They can also chat regularly with the assistant about books and literature!
The UI needs some fine tuning; we could add chat history, and you are welcome to do it! [Done] Please find the code on GitHub, and in case you need help let me know!
Conclusion
We just built our own web AI Assistant with an enhanced recommendation engine, utilizing a number of Azure and Microsoft services. It is important to prepare well ahead of such a project, load yourself with patience, and be prepared to make mistakes and learn! I went through 15 Docker images for the backend to reach basic functionality! But hey, I did it for everyone, so you can just grab it and enjoy it, even make it better. Thank you for staying up to this point!
References
Azure SDK for JavaScript
Azure AI Search
Create a Vector Index
Generate Embeddings
Fabric: Introduction to deployment pipelines
Develop, execute, and manage Microsoft Fabric notebooks
Outlook Categories Issue
Hi,
I use Categories a lot in Outlook. But some time ago, categories I had never created started appearing automatically. I deleted these new categories, but they keep returning over time.
I would appreciate any help.
Thank you,
Replace existing map route in ASP.NET Core
I would like to replace these routes only inside Startup.cs, written in ASP.NET Core 8.
How can I replace the existing action routes /login, /user-registration… with the following localized routes?
/en/login, /it/login
/en/user-registration, /it/registrazione-utente
…
en and it refer to the culture.
Domain impersonation in hybrid
Hi all,
I'm seeing strange behavior in my Exchange hybrid deployment.
I have 2 internal Exchange 2016 mailbox servers and 2 Edge 2016 servers. All mailboxes are still hosted on-premises. Hybrid configuration is in place. The MX record (company.com) points to Exchange Online; emails are then routed to the Edge servers and then to internal mailboxes. Outbound email is routed to the Edge servers, then to Exchange Online, and on to the external recipient.
I have configured the Anti-Phishing policy to protect all my domains against domain impersonation. Now, every mail sent to external recipients is detected as an impersonation attempt of my domain "company.com". Both Edge server public IP addresses are part of my SPF record. All certificates and connectors seem fine. When I send an email from on-premises to an internal mailbox that is hosted in Exchange Online, the SPF check passes and the mail is considered internal.
I know I can disable impersonation protection for this domain, but that does not resolve the root cause. So what could cause the detection for every single mail to external recipients?
Task bar – immobility
Please allow us to move our taskbars again. I’ve kept mine on the right side of the screen for as long as I can remember. It seems odd that the ability was removed in the first place.
Thanks.
Microsoft Roadmap
I am building a Microsoft Roadmap composed of several connected plans taken from Microsoft Project for the web. I seem to be having issues with project plan connectivity: every time I open the Roadmap, there are projects that aren’t syncing updates. These are highlighted in red in the screenshot below. The connection can be fixed by following the suggested steps, but the problem returns after about an hour.
Intune role assignment does not work
Hi,
I am unable to assign Intune roles (both built-in and custom) to Entra groups/members. I have a security group in Entra with a direct member, and I have assigned the group to one built-in role and one custom role in Intune. However, the user in the group does not receive any permissions (https://intune.microsoft.com/#view/Microsoft_Intune_DeviceSettings/RolesLandingMenuBlade/~/myPermiss…) and has no access to the information defined in the Intune roles.
Any solution? What have I missed?
Alter the value of one MS List column dependent on another
Hi
I have tried searching for days, but I think I may be getting the wording wrong.
I have a Microsoft List hosted on a SharePoint site; this query concerns two columns.
When an item is created, a choice column named PDC is marked as Yes or No.
Another choice column, Item status, has Open or Closed as choices.
I am trying to write a flow so that when an item is marked as Closed, PDC is automatically set to No. In the flow I create, when I apply a True/False condition and, for True, add an ‘Update item’ action, it presents me with a blank item template (as though I were creating a new item). I want to update the item that was changed to Closed, not create an entirely new one.
A screenshot is below in case I am not explaining it well.
Thanks everyone
Is there a way to force keyboard type for the HTML 5 client?
We have an older application that does not accept keyboard input in certain controls. In some investigation, we determined that setting the remote keyboard layout to “US (QWERTY)” allows entry in those controls.
Rather than having end users go into the client settings (which are lost across browsers/machines), we’d like to find a way to force this setting.
Is there a cmdlet or some other way to have the HTML 5 client always use that keyboard layout?
Thanks,
Tim