Month: July 2024
Need advice on the below PowerShell script to deploy to Intune devices
Hi Champs,
Not sure if the below script is correct. It seems to work if we run PowerShell as administrator, but otherwise it throws errors. I want to deploy the script to endpoint devices via Intune. Please assist.
I want to set the below location as the path under Word > Options > Save > Default personal templates location.
ForEach ($user in (Get-ChildItem "C:\Users" -Exclude Public))
{
    $location = "C:\Users\$($user.Name)\Documents\Custom Office Templates"
    $IsPresent = Get-ItemProperty 'HKCU:\SOFTWARE\Microsoft\Office\16.0\Word\Options' | ForEach-Object { If ($_ -like '*PersonalTemplates*') { Return 'True' } }
    if (-Not ($IsPresent -eq 'True'))
    {
        New-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Office\16.0\Word\Options' -Name 'PersonalTemplates' -Value $location -PropertyType ExpandString -Force
        New-Item -ItemType Directory -Force -Path $location
    }
    $existingValue = Get-ItemPropertyValue -Path 'HKCU:\SOFTWARE\Microsoft\Office\16.0\Word\Options' -Name 'PersonalTemplates'
    if ([string]::IsNullOrWhiteSpace($existingValue)) {
        Set-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Office\16.0\Word\Options' -Name 'PersonalTemplates' -Value $location
    }
    else {
        $location = $existingValue
        if (!(Test-Path $existingValue))
        {
            New-Item -ItemType Directory -Force -Path $existingValue
        }
    }
}
Regards,
Ram
Is there a way to detect a user logged into Microsoft Edge via c# code
One of our clients has staff members authenticated via SSO in Microsoft Edge using Entra – https://learn.microsoft.com/en-us/deployedge/microsoft-edge-security-identity – Does anyone know if there is a way to detect the user that is logged into Microsoft Edge via C# code? The screenshot below shows what I mean when I say a user is logged into Microsoft Edge.
I have looked at the HttpContext and the cookies, but cannot see anything that would help me detect the user that is logged into Microsoft Edge.
If this is not possible, that is also fine, but would be good to know one way or another so any help would be appreciated.
Unable to stop editing of a SharePoint document library after a user has filled in the columns once
Hi All
I want to stop people from editing their entries once they upload a document and fill in the other fields (name, date, etc.). To achieve this I removed the members' edit permission. But the problem in the document library is that, right after uploading a document, the user is not allowed to fill in the rest of the columns. Is there any solution to this?
Thank you in advance.
Windows Update keeps on failing
Some update files are missing or have problems. We’ll try to download the update again later. Error code: (0x8007000d)
Twitter to MP4 help! How to download Twitter videos as MP4 on a Windows PC?
Hi everyone,
Twitter is a nice place to find funny videos, and I sometimes want to download them on my Windows PC. Currently, I'm looking for a way to download videos from Twitter and save them as MP4 files. I tried various online tools and browser extensions, but they are unable to parse the video from the Twitter link.
Can anyone recommend a trustworthy method or tool for downloading Twitter videos as MP4? Ideally, I'd prefer something that's easy to use and doesn't require installing any software.
Thanks in advance!
Customize SharePoint App Bar, Hide Create Button
Is it possible to customize the SharePoint app bar, for example to hide the Create button or define what users can create with it (e.g., no lists/SharePoint sites)?
Also, does anyone know if Microsoft will force organisations with existing tenants to use the SharePoint app bar? At the moment we have it deactivated in our existing tenant, because we don't/can't use it as it is. So it would be bad if Microsoft force-activates the app bar in our tenant.
Surface Hub 3 – install Power BI apps or Edge
Hi there,
we already run 2 Surface Hub 2S units in our department, and we now have 3 new Surface Hub 3s, but all they are able to do is run Teams?
How do I install new apps or Edge? We want to run Power BI or an HTTP link during the time we don't use Teams.
JSON SharePoint list formatting working from link but not from list
We have a SharePoint list that is populated with different data. I have set some JSON on the body to make this data easier to use and visible in one overview. However, since a couple of weeks ago the JSON is not working correctly anymore. At first I thought it wasn't working at all, but then I noticed that it did give me the different sections I created; it just put all the data in list form and not in a nice overview. I tried different options in the settings, but for some reason the format stayed as is.
There are some Power Automate flows attached, and one of the flows sends an email with a link to an item from the list. The strange thing is that when we use the link, the JSON works completely. So it appears that when we open the item via the link the JSON is working, but when we open the item via the list itself the JSON formatting is not working.
JSON:
{
  "sections": [
    {
      "displayname": "Algemene Materiaal Gegevens",
      "fields": [
        "UIN Code",
        "UIN description",
        "Material Group",
        "UNSPC",
        "Base UoM",
        "Alternative UoM",
        "Warehouse",
        "Product Contact",
        "MSDS Required",
        "Stock Item",
        "Repair",
        "Reason of request",
        "Maintenance Engineer",
        "Floc",
        "P&Id tag",
        "Status"
      ]
    },
    {
      "displayname": "Leverancier Gegevens",
      "fields": [
        "Manufacturer Part Number",
        "Manufacturer",
        "Supplier",
        "Supplier Article Number",
        "Supplier Description",
        "Commodity Code",
        "Country of Origin",
        "Delivery Time in days",
        "Order Unit",
        "Price per Order Unit"
      ]
    },
    {
      "displayname": "Voorraad Informatie",
      "fields": [
        "Stock present",
        "Minimum Stock",
        "Expected Year Demand",
        "Storage Bin",
        "Type aanvraag",
        "Date Created",
        "Requester",
        "Order quantity",
        "Ordered via Ariba",
        "Opslaglocatie MSDS materiaal",
        "Nummer"
      ]
    }
  ]
}
View when we enter via link
View when we enter from list
Any idea why there is a difference in the way the JSON works? I would like it to always look as it does now via the link. Previously this was always the case.
Visual Studio AI Toolkit : Building GenAI Applications
Level up your Generative AI development with Microsoft’s AI Toolkit! In the previous blog, we explored how AI Toolkit empowers you to run LLMs/SLMs locally.
The AI Toolkit lets us:
Run pre-optimized AI models locally: Get started quickly with models designed for various setups, including Windows 11 running with DirectML acceleration or direct CPU, Linux with NVIDIA GPUs, or CPU-only environments.
Test and integrate models seamlessly: Experiment with models in a user-friendly playground or use a REST API to incorporate them directly into your application.
Fine-tune models for specific needs: Customize pre-trained models (like popular SLMs Phi-3 and Mistral) locally or in the cloud to enhance performance, tailor responses, and control their style.
Deploy your AI-powered features: Choose between cloud deployment or embedding them within your device applications.
Port Forwarding, a valuable feature within the AI Toolkit, serves as a crucial gateway for seamless communication with the GenAI model. Whether it’s through a straightforward API call or leveraging the SDKs, this functionality greatly enhances our ability to harness the power of the LLM/SLM. By enabling Port Forwarding, a plethora of new scenarios unfold, unlocking the full potential of our interactions with the model.
Port forwarding is like setting up a special path for data to travel between two devices over the internet. In the context of AI Toolkit, port forwarding involves configuring a pathway for communication between the LLM and external applications or systems, enabling seamless data exchange and interaction.
The AI Toolkit auto-forwards port 5272 by default. If needed, this can be modified, or new ports can be added. This can be seen the moment we load the AI Toolkit extension in VS Code: a notification appears on the right side of the screen stating "Your application running on port 5272 is available".
Port 5272 is the default port assigned by the AI Toolkit; if we wish to add more ports, that can be done by navigating to the PORTS panel in VS Code and then clicking the "Add Port" button.
Here we can see the "Forwarded Address" section. This is the address that will be used for communicating with the SLM. For this tutorial, Phi-3 will be used; it is a Small Language Model from Microsoft and can be downloaded from the Model Catalog section.
Testing and comprehending the API endpoint in Postman:
Testing an API in an API-testing application gives clear knowledge of the API specification. This can also be done in code, but for this demonstration I will showcase it through the Postman application. Postman needs to be downloaded and installed on the local machine; also sign up and create an account if you don't have one. Once the application is launched, click on creating a new request, displayed as a "+" icon.
To do this testing, we need some basic information about the API being tested. Details like the request method, request URL, request body, request body type, and authentication type are mandatory. For the Visual Studio Code AI Toolkit API with the Phi-3-mini-128k-cuda-int4-onnx model, the details are as follows:
Authentication: None
Request method: POST
Request URL:
http://127.0.0.1:5272/v1/chat/completions
Request Body type: Raw/JSON
Request Body:
{
    "model": "Phi-3-mini-128k-cuda-int4-onnx",
    "messages": [
        {
            "role": "user",
            "content": "Hi"
        }
    ],
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 10,
    "max_tokens": 100,
    "stream": false
}
Authentication defaults to None, so it can be left as is. The HTTP request method here is POST. The request URL must be checked against the port that has been assigned; if it is the default port assigned by the AI Toolkit, it will be 5272. In case it has been changed to another port, the URL must be changed as well. This is an HTTP request, and the URL must contain the address, followed by the port, with the route set as shown above.
The "model" parameter in the request body must match the model loaded in the VS Code AI Toolkit playground section.
The request body type must be set to Raw, and JSON selected from the dropdown.
The stream parameter must be set to false, otherwise the response will be streamed and divided into chunks of JSON that are hard to read. If the application needs streaming, the parameter can be set to true.
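When streaming is enabled, the response arrives as a sequence of JSON chunks whose text deltas the client has to concatenate. A minimal sketch of that accumulation (the chunk shape below follows the standard chat-completions streaming format; the sample chunks are hypothetical, not actual server output):

```python
def join_stream(chunks):
    """Concatenate the text deltas of a streamed chat completion.

    Each chunk is expected to look like
    {"choices": [{"delta": {"content": "..."}}]}; the final chunk
    may carry an empty delta, which is skipped.
    """
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        piece = delta.get("content")
        if piece:
            parts.append(piece)
    return "".join(parts)

# Hypothetical chunks, as the server might send them:
sample = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [{"delta": {}}]},  # end-of-stream chunk
]
print(join_stream(sample))  # prints "Hello"
```

For the rest of this tutorial, stream stays false so that each response arrives as a single JSON body.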
Once these are ready, click the Send button and wait for the response. If the API call is successful, 200 OK will be displayed and a response body will be visible in the Body tab.
NOTE: The VS Code AI Toolkit must be running in the background, and the model must be loaded in the playground, before sending the API request in Postman.
The response to the above request is as follows:
Meanwhile, this will also be reflected in the VS Code AI Toolkit Output window.
The response body carries quite a lot of information; the answer to be used in the application is in the "content" field. The code must navigate to the first entry of choices, then to the message, and finally to its content. In Python, it can be done with:
chat_completion.choices[0].message.content
This will be further used while building the playground application in the later part of this tutorial.
The response body also provides more useful information, such as the id, the created timestamp, and the finish reason, to name a few. The role parameter shows that the response is from the model, and hence shows "assistant".
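As a concrete illustration, the same navigation on a raw response body looks like this (the dictionary below is a hypothetical, trimmed-down response in the standard chat-completions shape, not actual model output):

```python
import json

# Hypothetical response in the standard chat-completions shape (trimmed).
raw = json.dumps({
    "id": "chatcmpl-123",
    "created": 1720000000,
    "choices": [
        {
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
        }
    ],
})

response = json.loads(raw)
# Navigate: first element of "choices" -> "message" -> "content".
answer = response["choices"][0]["message"]["content"]
print(answer)  # prints "Hello! How can I help?"
```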
A code snippet can also be generated with the Postman app, which can then be used for testing the API. Following is an example of the Python code using the HTTP client.
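A sketch of such an http.client call against the endpoint configured above (this is a reconstruction under the same model and port assumptions, not Postman's verbatim output):

```python
import http.client
import json

def build_payload(prompt):
    """Build the JSON request body used throughout this tutorial."""
    return json.dumps({
        "model": "Phi-3-mini-128k-cuda-int4-onnx",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "top_p": 1,
        "top_k": 10,
        "max_tokens": 100,
        "stream": False,
    })

def ask(prompt, host="127.0.0.1", port=5272):
    """POST the payload to the local AI Toolkit endpoint and return the answer.

    Requires the model to be loaded in the AI Toolkit playground and the
    port to be forwarded, as described above.
    """
    conn = http.client.HTTPConnection(host, port)
    conn.request(
        "POST",
        "/v1/chat/completions",
        body=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    response = conn.getresponse()
    body = json.loads(response.read())
    conn.close()
    return body["choices"][0]["message"]["content"]
```

With the AI Toolkit running and the model loaded in the playground, print(ask("Hi")) sends the same request that was tested in Postman and prints the model's reply.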
Using the VS Code AI Toolkit with Python:
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5272/v1/",
    api_key="xyz"  # required by API but not used
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of India?",
        }
    ],
    model="Phi-3-mini-128k-cuda-int4-onnx",
)

print(chat_completion.choices[0].message.content)
The Python script interacts with an API (local instance of OpenAI’s API) to get a chat completion.
Step-by-step explanation of Python Implementation:
Importing the OpenAI library: the script starts by importing the OpenAI class from the openai package. This class is used to interact with an OpenAI-compatible API. The package can be installed with a simple pip command:
pip install openai
Although we are not communicating with OpenAI's API, we are utilizing the OpenAI client library to interact with the model running on the local machine.
Create an OpenAI client instance: initialize an OpenAI client with a base URL pointing to http://127.0.0.1:5272/v1/, which indicates the API is hosted locally rather than on OpenAI's cloud servers. An API key, "xyz", is also provided; it is required by the client but not used in this context. The API key can be set to anything, not necessarily "xyz".
Create a chat completion request: The script then creates a chat completion request using the chat.completions.create method of the client. This method is called with two parameters:
messages: A list containing a single message dictionary where the role is set to "user" and the content is the question "What is the capital of India?". This structure mimics a chat interaction where a user asks a question.
model: Specifies the model to use for generating the completion, in this case, “Phi-3-mini-128k-cuda-int4-onnx”. This indicates a specific model configuration.
Print the response: finally, the script prints the content of the first message from the response's choices. The API returns a list of possible completions (choices), and the script accesses the content of the message from the first choice to display the answer to the user's question, as demonstrated in the Postman section.
Run the code in a new VS Code window. Preferably use a virtual environment to execute this code. Python is a prerequisite. To learn more, click here.
Execute the code using the command
python <filename>.py #replace filename with the respective filename.
or click the Run button at the top right of the Visual Studio Code window.
The response is now printed on the terminal.
Developing Basic Application:
Similarly, we can use it in applications as well. To demonstrate, let's build a basic application using Streamlit and Python. Streamlit turns Python scripts into shareable web apps in minutes; to know more, click here. To install Streamlit, use the following command in the Python/VS Code terminal:
pip install streamlit
The following script creates a web-based chat interface using Streamlit where users can input queries, which are then sent to an AI model via the local OpenAI-compatible API server. The AI's responses are displayed back in the chat interface, facilitating a conversational interaction.
import streamlit as st
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5272/v1/",
    api_key="xyz"  # required by API but not used
)

st.title("Chat with Phi-3")

query = st.chat_input("Enter query:")
if query:
    with st.chat_message("user"):
        st.write(query)
    chat_completion = client.chat.completions.create(
        messages=[
            {"role": "user", "content": "You are a helpful assistant and provides structured answers."},
            {"role": "user", "content": query}
        ],
        model="Phi-3-mini-128k-cuda-int4-onnx",
    )
    with st.chat_message("assistant"):
        st.write(chat_completion.choices[0].message.content)
Since the above code uses Streamlit, the startup command has a different syntax:
streamlit run <filename>.py #replace filename with the respective filename.
Upon successful execution, Streamlit opens a new window in Microsoft Edge with the web page.
This is how we can create GenAI applications using models running in the local VS Code AI Toolkit environment. In the upcoming blog, let's see how to apply retrieval-augmented generation using the AI Toolkit framework.
Microsoft Tech Community – Latest Blogs – Read More
What does the following error mean: "Internal error while creating code interface description file: codeInfo.mat. Aborting code generation."?
Codegen is throwing the following error when doing code generation
"Internal error while creating code interface description file: codeInfo.mat. Aborting code generation.
Caused by:
Index exceeds the number of array elements. Index must not exceed 0. "
The error doesn't give any more information, and hence I cannot figure out what the problem is. I am using an ERT-based target file.
internal error MATLAB Answers — New Questions
How to deal with timing error when using Timer Function?
Hi everyone,
I am currently writing code using a timer object to record the mouse path at a 1 kHz rate (i.e., I want to record the position of the mouse every 0.001 s). I discovered that the timer function does not have precision under 50 ms. I also found that the MATLAB time outputs did not add up to the total trial running time (i.e., if I add up the time tracked by the timer function, the value is always less than the actual running time of the mouse path). I think that if the timing error came from the accumulation of execution time of the timer callback, in theory the value should be greater than the actual running time. I am hoping someone could take a look at my code and give me some suggestions, and am also wondering if others have encountered a similar issue. The following is the timer object that I used:
timerObj = timer('TimerFcn', @recordMousePos, 'Period', 0.001, 'ExecutionMode', 'fixedRate');
Thank you!
timerfcn, timer
How to generate a Simulink model from multiple Verilog files?
I am trying to generate a Simulink model from almost 175 Verilog files. Using importhdl, it always shows some error, whether I try using a folder, subfolder, path, or filenames. Is there any way to fix it?
simulink model, hdl, simulink generation
Storage Migration Service – Procedure Clarifications
Hi all,
Some SMS procedure clarifications.
We have an old file server on a physical 2008 R2 server that we have to migrate ASAP.
This old server also uses FSRM for quotas and file screening, and contains about 5.5 TB in 5.5 million files.
The plan is to migrate it with SMS to Server 2022 on our Hyper-V infrastructure.
Yesterday I installed the SMS service on our management server (Server 2019), which runs the WAC service, and started the inventory on the old file server.
My questions that need clarification:
* Is there a recommended maximum time between the 3 stages – scan old server, transfer data, cutover?
* If I did the scan on my old server yesterday and plan to do the transfer only in about 2 weeks, what about the files and permissions that change in that period? Do I need to run a new scan on the old server near the date of the actual transfer? Should I delete or clear something from the previous scan? (The scan took about 5 hours.)
* I understand that the transfer process takes care of changes made to the scanned files since the last scan.
* I plan to add the FSRM role to the new server and, after the cutover stage, import the FSRM settings from the old server to the new one using dirquota & filescrn – is that the way to do it? Will it work with Server 2022 FSRM?
I appreciate any suggestions and clarification on the migration process.
Regards,
Looking for a solution to build Mobile wallet App on Azure Stack across 6 regions
Hello Team,
We are seeking a solution from Microsoft, with technical documentation and commercial details, for building a mobile wallet app across six regions using Azure private cloud. We require urgent assistance and a consultation with the Azure technical team.
How to export RegressionEnsemble to ONNX.
exportONNXNetwork only works on neural nets. However, ONNX has support for regression models, as demonstrated here: https://onnx.ai/sklearn-onnx/auto_examples/plot_convert_model.html
Anyone have any ideas on a workflow to get our model(s) out of MATLAB for use in Triton? There will probably need to be an intermediary format.
machine learning, export, python, onnx, regression
Issue in IP core generation
Failed: 'C:\Users\aijaz_22011140\OneDrive – Universiti Teknologi PETRONAS\Desktop' contains white space in the project path. Please try removing the white space from the working directory.
This is the issue on my laptop; I am trying to solve it but am unable to.
matlab, fpga, xilinx, pid, controller
Microsoft Outlook Bug – Auto Duplicated Appointment
Hi,
Not sure where to report this, or if there is currently a way to delete all the appointments created at a specific time and day, because there are too many of them.
I tried to create an appointment in Outlook 365 (not the Outlook that comes with Windows).
What I did was create a Microsoft Teams meeting that ended up with two meeting links in one appointment. After deleting one of the links, I proceeded to create the appointment and sent it to my intended recipients.
Little did I know, it suddenly blasted out about 7,000+ appointments.
Is there any way to delete them, or is this a bug?
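One possible cleanup route, assuming the duplicates share a subject and fall in a known time window, is the Microsoft Graph calendar API: list the matching events with an OData filter, then delete each one by id. The sketch below only builds the request URLs; real calls need an OAuth access token (e.g. obtained via the msal library), and the subject and window shown are placeholders — test on a handful of events before looping over thousands:

```python
# Sketch of a Microsoft Graph cleanup for mass-duplicated appointments.
# Builds the list/delete URLs only; actual requests need an Authorization
# header with a delegated Calendars.ReadWrite token.
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def build_list_url(subject: str, start_iso: str, end_iso: str) -> str:
    """GET URL listing calendar events matching a subject inside a window."""
    flt = (f"subject eq '{subject}' and "
           f"start/dateTime ge '{start_iso}' and "
           f"start/dateTime lt '{end_iso}'")
    return f"{GRAPH}/me/events?$filter={quote(flt)}&$select=id,subject&$top=100"

def build_delete_url(event_id: str) -> str:
    """DELETE URL removing a single event by id."""
    return f"{GRAPH}/me/events/{event_id}"

# Placeholder subject/window — substitute the real duplicated meeting's.
url = build_list_url("Project sync", "2024-07-01T09:00:00", "2024-07-01T10:00:00")
print(url)
```

Page through the list results (Graph returns an `@odata.nextLink` when there are more), collect the ids, and DELETE each. If this really did send 7k invites, the recipients' calendars are affected too, so cancelling the meetings from the organizer side is preferable to silently deleting them.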
OneDrive lost my files
Yesterday I lost all my files on OneDrive
I lost my connection to OneDrive for unknown reasons – it just happened. When I then tried to access my OneDrive files via the browser, I found that all my files were gone. I have tried to contact Microsoft, and all I get is someone on the chat who can only help with problems on my end. This is not my problem or a problem with my device; it is a problem at Microsoft's end. I was advised to try this forum, so I hope you can give me some advice. Thanks
Access Database Deployed to Sharepoint
I just got done bidding on an Access project that requires sending email via Outlook and generating some reports in Excel format. I think to myself, "no problem, I know the VBA for doing that". I win the contract, and then the customer says: "When you are done, we will just put it on SharePoint where everyone can use it."
From what I can gather from the forums, Access on SharePoint does not support VBA. Is there some other way I can send email via Outlook and generate Excel reports when the Access file is on SharePoint?
Running WordPress from a Subfolder in Azure App Service – Not Working
I’m running into an issue with my Azure App Service deployment. I have a primary PHP site running smoothly from the root directory, but I’m having trouble getting a WordPress blog to work from a subfolder (e.g., https://mydomain.com/blog).
Despite ensuring that all WordPress files are correctly placed in the site/wwwroot/mysite/blog directory, accessing https://mydomain.com/blog results in a “Not Found” error.
Here are a few details about my setup:
The primary site is a custom PHP application.
The application is running on Linux.
The WordPress files are located in site/wwwroot/mysite/blog.
I’ve verified that the subfolder and its contents are accessible via FTP.
The main site and the blog use the same database.
I have set WORDPRESS_MULTISITE_CONVERT = true and WORDPRESS_MULTISITE_TYPE = subdirectory in the Environment Variables. The .htaccess file looks like this:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>
# END WordPress
Has anyone encountered a similar issue or have any insights on how to successfully run a WordPress site from a subfolder in Azure App Service? Any tips or configuration changes that I might be missing would be greatly appreciated!
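One likely cause to rule out first: the stock PHP 8 image on Linux App Service serves requests through nginx rather than Apache, in which case the .htaccess above is never read at all. A common workaround is to keep a customized nginx server config under /home (which persists across restarts) and copy it into place with a startup command. A sketch of such a config, assuming the document root and /blog layout described in the post — adapt it from the image's actual default server block rather than using it verbatim:

```
# /home/site/nginx-default.conf — based on the image's default server block,
# with a /blog location added so WordPress permalinks fall back to index.php
server {
    listen 8080;
    root /home/site/wwwroot/mysite;
    index index.php index.html;

    location /blog/ {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

Then set the App Service startup command to copy the file over the image's default nginx site config and reload nginx. Check the container log stream first to confirm which web server your image actually runs; if it really is Apache, the .htaccess route is valid and the problem lies elsewhere (e.g. AllowOverride or the WordPress site URL settings).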