Category: News
OneDrive continuously signing out
Hi, I wondered if anyone can help….
OneDrive used to sync files from MS Teams/SharePoint like a dream until about 10 days ago.
Now the desktop client seems to drop the connection constantly, as though the network is dropping or a firewall is blocking things. Before I dig in further, I thought I would ask if anyone else has had this recently.
Nothing I do makes OneDrive stay signed in….
I have already ensured my Dell 7310 is fully up to date on BIOS and software.
Signed out of MS Teams, O365, OneDrive
Cleared the cache from C:\Users\username\AppData\Local\OneDrive
Full shutdown, restart, log in everywhere; OneDrive signs in, sees 15 changes, and signs out.
I have even unlinked the laptop, done the above again, and relinked the laptop; it syncs and then signs out.
Has anyone got any ideas??
Handling Entity Data in Sentinel
So, I have set up some playbooks that allow me to add IPs/Domains/File Hashes to the MDE Indicators list, which is awesome to have and saves time when we need to block malicious entities. However, I have not found a great way for Sentinel to give me more information regarding File Hashes.
Really, my main worry with just a list of hashes in an incident is not knowing the file name for each hash, like so:
So, in this case, I am to just assume that both file hashes go to the ‘FileCoAuth’ file. Easy enough. But, are there ever cases where something like msedge.exe shows up in this list of file hashes? Right now, I feel like in this ‘Info’ tab, it might be more helpful to have ‘File Name’, but I might be looking at this all wrong.
I guess, I am just looking for some guidance into this entity so that I don’t accidentally block the wrong file and end up breaking systems.
Even if these hashes only ever correspond to the one file entity in the incident, I am still a bit confused at how little data comes over into this. Even for the File entity:
Great, I know the name of the file and the path. However, over in Defender, I get TONS of info for the file, including all the hashes connected to it, first seen / last seen, basic VirusTotal info, and a bunch of other items. Am I expecting too much by hoping that we wouldn’t have to jump over to Defender? We set up Sentinel with the hope of making it the go-to, but still find ourselves going right back to Defender for investigations, and I wasn’t sure if there is something I am missing in this setup, or if there is a way to get more data enrichment without having to pay VirusTotal’s insane bill (we are an SMB and were quoted 90k per year, minimum). Even then, since Defender has some of the basic VirusTotal info, I was hoping Sentinel would have that and more.
Outlook rule for selective forwarding of email received via distribution list
After trying rules based on different criteria, searching the internet, etc., I cannot get a rule to work for this scenario:
Inbound email received from a government entity, sent to our company’s distribution list. I am one of several in the distribution list and have no problem receiving the emails. What I would like to do is forward these emails to only a few people who are not part of the distro (and do not need to receive all email to the distro). The from email is like email address removed for privacy reasons. (keyword is the same before and after “@”). The email subjects are varied, so unable to use subject.
Everything I’ve tried (thus far) doesn’t cause the email to auto forward. I’ve tried the email contains keyword and sent to distro name. I’ve tried from email contains keyword and to email contains keyword. I’ve tried sent to user a or b or c (members of distro). I’m not sure what I’m missing to get it working. I have similar rules to move emails sent to another distro I’m part of to a specific folder, and that seems to work fine.
I am wondering if there is some larger setting at play with IT and our parent company. Not sure what I can check on as a regular user that might get this working.
Thanks in advance!
jt
Excel 2021. Double sided printing throws left and right margins off. No gutter setting.
Hello, I upgraded to Office 2021 Plus a few months ago from Office 2019. Ever since, when printing double-sided documents, the front page is shifted to the left, with very little margin on the left and too much on the right. The flip side is exactly the opposite, with very little margin on the right and too much on the left. These pages are printed in landscape and flipped on the long edge. They were set up and printed perfectly in Office 2019, and only shifted when printed through Office 2021 Plus.
I have searched the internet and have found many solutions to others having this issue by adjusting the “gutter” which as far as I can tell is supposed to be under the Page Setup/Margins tab. Unfortunately, my margins tab is not showing a gutter setting, nor is any tab under page setup. If there is someone that can help me with this issue, I would be very grateful.
SharePoint tenant out of space
Hello,
Our Sharepoint tenant is out of space, see below:
But everything still works at the moment: we can edit and upload files to SharePoint sites. Why is that? If we are still able to upload, is it because it is using space from the recycle bin, or has it deleted version history on its own?
Best
NIS2 Trainings, May 2-9, 2024
Dear Microsoft Partners,
The Network and Information Systems Directive 2 (NIS2) represents the European Union’s latest stride in bolstering cybersecurity across Member States, coming into effect in October 2024. As digital threats evolve, NIS2 represents a unique opportunity to help customers improve their cybersecurity posture now.
Our mission is to equip you with the knowledge and tools necessary for helping your customers not just meet NIS2 requirements but exceed them. We look forward to your participation.
Microsoft and our training partner Fast Lane partnered up for an exclusive webinar series designed to demystify the NIS2 Directive for you as our Partner and how you can use it as an opportunity to help customers improve their cybersecurity health.
The webinar series will run in English, German, French, Spanish and Italian between May 2nd and May 9th.
Agenda
Each 2-hour webinar will offer insight into:
The NIS2 Directive, with essential information on its legal interpretation at the local country level
How the Microsoft platform can help customers meet NIS2 expectations
Guidance on how you, as a Microsoft security partner, can build a sales offer or solution addressing NIS2
At the end of each webinar, you will receive a Microsoft-commissioned playbook on how to build your sales offer around NIS2 and evolve your practice with Microsoft Security.
Who should attend this event?
The webinars are suitable for any Microsoft partner, both sales and technical professionals with a Microsoft Security background (any level).
Register today
Microsoft Tech Community – Latest Blogs
What is the difference between rcosdesign and fdesign.pulseshaping?
The release notes for the Signal Processing Toolbox (https://www.mathworks.com/help/signal/release-notes.html?searchHighlight=firrcos&s_tid=doc_srchtitle) say that firrcos and fdesign.pulseshaping are deprecated in favor of rcosdesign, and a little farther on they give specific code for converting from fdesign.pulseshaping to rcosdesign. Specifically, it says:
n1n = rcosdesign(Beta,span,sps);
n1n = n1n / max(n1n) * (-1/(pi*sps) ...
    * (pi*(Beta-1) - 4*Beta))
is equivalent to:
sps = 6;
span = 4;
Beta = 0.25;
f1 = fdesign.pulseshaping(sps, ...
    'Square Root Raised Cosine', ...
    'Nsym,Beta',span,Beta);
d1 = design(f1);
n1 = d1.Numerator
which appears to be true. However, can anyone explain what the
n1n = n1n / max(n1n) * (-1/(pi*sps) ...
    * (pi*(Beta-1) - 4*Beta))
is doing? In other words, what is the difference between these two functions such that some connector code is needed to go from one to the other? From a rough understanding, it looks like it is normalizing the values, but then I'm not really sure about the rest.
fir, filter, dsp MATLAB Answers — New Questions
Applying the Parzen window using a specific frequency for the width estimation
Hello, I would like to estimate a Parzen window of 0.1 Hz. This value is used to smooth the Fourier amplitude spectrum (FAS). However, I realized that it produces a stronger smoothing effect that can alter the amplitudes compared to the unsmoothed FAS of the signal (attached as a txt file). I came up with the following to estimate the Parzen window:
signal = load('signal.txt');
window_width_Parsen = 0.1; % 0.1 Hz suggested so as not to alter the amplitudes compared to the unsmoothed FAS
fs = 200; % sample frequency of the signal and the FAS
w_Parsen = (1/window_width_Parsen)*fs;
w_Parsen = round(w_Parsen,0);
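One thing that may explain the over-smoothing: w_Parsen = (1/0.1)*fs gives 2000, but when smoothing the FAS across frequency, the window length should be expressed in frequency bins, and the bin spacing is df = fs/N (N = record length), not fs itself. A sketch of that conversion in Python/numpy (N and the random stand-in spectrum are made-up illustration values; the window is built from the standard piecewise-cubic Parzen definition):

```python
import numpy as np

fs = 200.0        # sample rate in Hz (from the post)
N = 40000         # number of samples in the record (hypothetical)
width_hz = 0.1    # desired smoothing bandwidth

df = fs / N                                 # FFT bin spacing in Hz
nbins = max(3, int(round(width_hz / df)))   # window length in frequency bins

# Parzen window from its piecewise cubic definition, normalized to unit sum
n = np.arange(nbins) - (nbins - 1) / 2
x = np.abs(n) / (nbins / 2)
w = np.where(x <= 0.5, 1 - 6*x**2*(1 - x), 2*(1 - x)**3)
w /= w.sum()

rng = np.random.default_rng(0)
fas = np.abs(np.fft.rfft(rng.standard_normal(N)))  # stand-in amplitude spectrum
fas_smooth = np.convolve(fas, w, mode='same')      # smoothed FAS
```

With fs = 200 Hz and N = 40000 samples, df = 0.005 Hz, so a 0.1 Hz bandwidth is only 20 bins; a 2000-bin window would smooth over 10 Hz and visibly flatten the amplitudes.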
Trying to understand the power distribution in an fft plot
clear all
close all
clc
L=10;
n=1.45;
c=2.9979e8;
dt = 6e-12;
T=10*2*L*n/c;
eps0=8.854e-12;
A=80e-12;
t = (-T/2/dt:1:T/2/dt)*dt;
Nt=round(T/dt);
fsine = 1e9;
vsine = 1;
phi = vsine*sin(2*pi*fsine*t);
EL1t=1.274e7*exp(1i*phi);
FP=fft(phi);
fs=1/dt/Nt;
Fs=(-1/dt/2:fs:1/dt/2-1);
figure
Z=plot(Fs,fftshift(abs(fft(EL1t/Nt).^2*2*n*c*eps0*A)));
As you can see from the resulting fft plot, the peak of the graph at 0 Hz is around 60 W, but I am struggling to understand how the power is distributed throughout the plot.
The given input power is 100 W, I think. Shouldn't the central frequency at 0 Hz be around 100 W?
Where the rest of the power went is what I am not understanding.
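A hedged sketch of what is probably happening (I haven't run the posted script): EL1t = 1.274e7*exp(1i*phi) has constant modulus, so the total power is fixed (the prefactor with the given n, eps0, and A works out to roughly 100 W), and phase modulation can only redistribute that power, not concentrate it at one frequency. By the Jacobi-Anger expansion, exp(1i*m*sin(2*pi*f*t)) has spectral lines at integer multiples of f whose power fractions are J_n(m)^2 and sum to 1. For m = vsine = 1, the 0 Hz line carries J_0(1)^2 ≈ 0.586 of the total, i.e. about 59 W of 100 W, consistent with the ~60 W peak; the rest sits in the sidebands at ±1 GHz, ±2 GHz, and so on. A numpy-only check (bessel_j is a hand-rolled helper using the integral form of J_n):

```python
import numpy as np

def bessel_j(n, x, K=20001):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt  (integral form)
    t = np.linspace(0.0, np.pi, K)
    y = np.cos(n*t - x*np.sin(t))
    h = t[1] - t[0]
    return h * (y.sum() - 0.5*(y[0] + y[-1])) / np.pi  # trapezoidal rule

m = 1.0  # modulation index: phi(t) = vsine*sin(2*pi*fsine*t), with vsine = 1

carrier_fraction = bessel_j(0, m)**2                    # fraction of power at 0 Hz
total = sum(bessel_j(n, m)**2 for n in range(-20, 21))  # carrier + sidebands

print(carrier_fraction)  # ~0.586
print(total)             # ~1.0 (phase modulation conserves total power)
```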
Writing results into columns of an Excel sheet
%% After running the code for p1 = 0.01; p2 = 0.0; p3 = 0.0;
% MATLAB writes the calculated 'Cf' and 'Nu' into columns 'I' and 'J' of an Excel sheet, respectively.
%% But I want to run the code 3 times with different values: (i) p1 = 0.01; p2 = 0.0; p3 = 0.0;
% (ii) p1 = 0.01; p2 = 0.01; p3 = 0.0; (iii) p1 = 0.01; p2 = 0.01; p3 = 0.01;
% (other values are fixed as in the Excel sheet)
%% Now I want MATLAB to write the values of 'Cf' into columns "I, J, K" and the values of 'Nu' into columns "L, M, N" respectively (same Excel sheet).
%% Here is my try, please modify
status = mkdir('D:\PK79'); cd D:\PK79
filePath = 'D:\PK79\ADM3A.xlsx'; filename = 'ADM3A.xlsx';
d = readtable(filename);
T = fillmissing(d,'previous');
p1 = 0.01; p2 = 0.0; p3 = 0.0;
K = T.K; M = T.M; Pr = T.Pr; Ec = T.Ec; Q = T.Qe; D = T.D; b = T.b; Bi = T.Bi;
Cp = 1; rf = 7; kf = 0.6; sf = 5.5; C1 = 7; rhos1 = 1; k1 = 4; s1 = 1;
C2 = 5; r2 = 2; k2 = .5; s2 = 2.7; C3 = 6.2; r3 = 2; k3 = .9; s3 = 6.2;
H1 = ((1-p1)*(1-p2)*(1-p3))^-2.5; H2 = (1-p3)*( (1-p2)*( 1-p1 + p1*rhos1/rf ) + p2*r2/rf ) + p3*r3/rf;
H3 = (1-p3)*( (1-p2)*(1-p1 + p1*rhos1*C1/(rf*Cp)) + p2*r2*C2/(rf*Cp) ) + p3*r3*C3/(rf*Cp);
C2 = ( (s1+2*sf-2*p1*(sf-s1))/(s1+2*sf+p1*(sf-s1)) );
C3 = ( (s2+2*C2-2*p2*(C2-s2))/(s2+2*C2+p2*(C2-s2)) );
A3 = ( (s3+2*C3-2*p3*(C3-s3))/(s3+2*C3+p3*(C3-s3)) );
B1 = ( (k1+2*kf-2*p1*(kf-k1))/(k1+2*kf+p1*(kf-k1)) );
B2 = ( (k2+2*B1-2*p2*(B1-k2))/(k2+2*B1+p2*(B1-k2)) );
H4 = ( (k3+2*B2-2*p3*(B2-k3))/(k3+2*B2+p3*(B2-k3)) );
N = size(T,1); Cf = zeros(N,1); Nu = zeros(N,1);
for k = 1:N
    ODE = @(x,y)[y(2); y(3); y(4);
        M(k)*(x+K(k)).^2*(A3/H1).*(y(2) + (x+K(k)).*y(3)) - 2*y(4)./(x+K(k)) + y(3)./(x+K(k)).^2 - y(2)./(x+K(k)).^3 - (H2/H1)*K(k)*((x+K(k)).^2.*(y(1)*y(4) - y(2)*y(3))) - y(1)*y(2) + (x+K(k)).*(y(1)*y(3)-y(2)^2);
        y(6); - (Pr(k)/H4)*( Q(k)*(y(5) + exp(-D(k)*x)) + H3*K(k)*y(1)*y(6) + M(k)*Ec(k)*A3*y(2)^2 ) - y(6) ];
    BC = @(ya,yb)[ya(1); ya(2)-1-b(k)*(ya(3)-ya(2)/K(k)); ya(6)-Bi(k)*(ya(5)-1); yb([2;3;5])];
    xa = 0; xb = 6;
    x = linspace(xa,xb,101);
    solinit = bvpinit(x,[0 1 0 1 0 1]);
    sol = bvp5c(ODE,BC,solinit);
    S = deval(sol,x);
    Cf(k) = H1*( S(3,1) - S(2,1)/K(k) );
    Nu(k) = -H4*S(6,1);
end
T.Cf = Cf; T.Nu = Nu;
vars = T.Properties.VariableNames;
T = removevars(T,vars(startsWith(vars,'Var')));
writetable(T,filename,'WriteMode','overwritesheet')
T = readtable(filename) % check the result
Setting color to certain data in 3d plot, keeping rest of data following colormap
I am plotting data in a 3d heatmap using the bar3 command. I’d like to set some of the data to a certain color that is not included in the colorscheme of the legend bar, while keeping the rest of the data following the color map. The code I’m currently using is:
b = bar3(error);
set(gca,'XTickLabel',[2.5 5 7.5 10])
set(gca,'YTickLabel',[0 50 100 150 200 250])
ylim([0 20.5]);
colorbar
colormap turbo;
caxis([0 100]);
for k = 1:length(b)
    zdata = b(k).ZData;
    b(k).CData = zdata;
    b(k).FaceColor = 'interp';
end
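Not MATLAB, but the general pattern can be sketched in Python/matplotlib: compute the per-bar colors from the colormap yourself, then overwrite the entries you want pinned to a fixed color. (In MATLAB the analogous move would be to edit the corresponding CData entries after setting them from ZData.) The data, threshold, and gray override here are made-up illustration values:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, safe for scripts
import matplotlib.pyplot as plt
from matplotlib.cm import ScalarMappable

vals = np.array([10.0, 40.0, 95.0, 60.0, 20.0])  # hypothetical error values

norm = plt.Normalize(0, 100)           # same limits as caxis([0 100])
cmap = matplotlib.colormaps["turbo"]
colors = cmap(norm(vals))              # one RGBA row per bar, colormap-driven
colors[vals > 90] = (0.5, 0.5, 0.5, 1.0)  # pin the selected bars to a fixed gray

fig, ax = plt.subplots()
ax.bar(np.arange(len(vals)), vals, color=colors)
fig.colorbar(ScalarMappable(norm=norm, cmap=cmap), ax=ax)  # legend bar unchanged
fig.savefig("bars.png")
```

The key point is that the colorbar is drawn from the colormap/norm pair, so the gray override never appears in the legend.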
Tech Talks Presents: Power Pages Data Controls & External Data Connectivity | May 2nd
Join us on Thursday, May 2nd at 8am PT as Pranita Padalwar, Sr Product Manager presents Power Pages Data Controls & External Data Connectivity.
We hope you’ll join us!
Call to Action:
Click on the link to save the calendar invite: https://aka.ms/TechTalksInvite
View past recordings (sign in required): https://aka.ms/TechTalksRecording
Get started with the adoption tools here
Several Issues with Microsoft Bookings – Need Assistance
Hello everyone,
I’m new to Microsoft Bookings and encountering a few issues I hope the community can help me resolve.
1. Error on Booking Page:
No matter which access setting I choose (Available to anyone, No self-service, Available to people in your org), I keep receiving an error message on the booking page: “We aren’t offering services through the booking page right now. Please contact us directly or check back here later.” I’ve attached an image of the error below.
2. Viewing Staff Calendars on a Weekly/Monthly Basis:
Currently, I can sort the calendar by “day by staff,” which works well. However, I’m looking to view staff availability on a weekly or monthly basis. Is there a feature that allows this, or is it not possible within Bookings?
3. Address Field Not Showing Full Details in Outlook:
When creating an appointment, there’s a field for a Google address that is searchable and appears complete on Bookings. But, when I check the appointment notes in Outlook, it only shows the street address, not the full details. I will upload an image for clarity.
4. Rescheduling Appointments in Outlook:
We schedule appointments for our sales team, and these appointments sync with their Outlook calendars, which is fantastic. However, they are unable to drag and reschedule these appointments themselves. Must they access Bookings to change their schedules, or is there another way?
I appreciate any guidance or solutions you might offer!
Thank you!
"Data is Null. This method or property cannot be called on Null values." exception
In my application, multiple users call the API, and in the API I call a stored procedure via Entity Framework, but sometimes I get the exception "Data is Null. This method or property cannot be called on Null values." It is not consistent: if 4-5 users click the submit button, which calls the API, which executes the stored procedure, the exception occurs for some of them.
This is my main procedure
ALTER PROCEDURE [dbo].[sp_SaveDcumentAndParties](
@Action VARCHAR(20),
@partiesHistoryJson NVARCHAR(MAX)=NULL,
@exportExtraHistoryJson NVARCHAR(MAX)=NULL,
@FieldSettingsJson NVARCHAR(MAX)=NULL,
@CodingSessionDetails NVARCHAR(MAX)=NULL,
@DocumentID VARCHAR(50) = NULL,
@ProjectId VARCHAR(50) = NULL,
@DocumentDate NVARCHAR(255) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@Estimated NVARCHAR(255) = NULL,
@Title NVARCHAR(4000) = NULL,
@CodingQATime INT = NULL,
@IsCorrected INT = NULL,
@UserTask VARCHAR(10) = NULL,
@isHistoryDocument Bit,
@ReturnJSONResult NVARCHAR(MAX) =NULL
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
DECLARE @Description NVARCHAR(MAX);
IF @Action = 'Save'
BEGIN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
BEGIN TRY
-- Deadlock avoidance mechanism
SET DEADLOCK_PRIORITY LOW;
DECLARE @IsAddPartiesAndPartiesHistory BIT;
DECLARE @IsAddCodedData BIT;
DECLARE @IsUpdateImportPages BIT;
DECLARE @IsUpdateAdminRegeTitle BIT;
DECLARE @IsUpdateExportExtras BIT;
DECLARE @IsInsUpdFieldAdminValidations BIT;
DECLARE @IsExecCodingSessionDetails BIT;
SAVE TRANSACTION MySavepoint; -- Savepoint before the inner procedure calls
EXEC AddPartiesAndPartiesHistory @partiesHistoryJson, @exportExtraHistoryJson,
    @DocumentID, @IsReturn = @IsAddPartiesAndPartiesHistory OUTPUT;
EXEC AddCodedData @DocumentID, @ProjectId, @DocumentDate, @DocumentType, @EnteredById,
    @Estimated, @Title, @CodingQATime, @IsCorrected, @UserTask,
    @IsReturn = @IsAddCodedData OUTPUT;
EXEC UpdateImportPages @DocumentID, @DocumentType,
    @EnteredById, @UserTask, @IsReturn = @IsUpdateImportPages OUTPUT;
EXEC UpdateAdminRegeTitle @DocumentID, @DocumentType,
    @EnteredById, @Title, @DocumentDate, @Estimated, @UserTask,
    @IsReturn = @IsUpdateAdminRegeTitle OUTPUT;
IF @UserTask = 'Coder'
BEGIN
EXEC UpdateExportExtras @DocumentID, @IsReturn = @IsUpdateExportExtras OUTPUT;
EXEC InsUpdFieldAdminValidations
    @DocumentID, @FieldSettingsJson, @IsReturn = @IsInsUpdFieldAdminValidations OUTPUT;
END;
DECLARE @DocOrgID INT = (SELECT Id + 1 FROM ImportPages WHERE Document_ID = @DocumentID);
DECLARE @NextDocumentId NVARCHAR(100);
DECLARE @NextId INT = 0;
DECLARE @IsReturnFlag BIT = 0;
DECLARE @CodeCompletedCount INT = 0;
DECLARE @QaCompletedCount INT = 0;
DECLARE @TotalDocCount INT = 0;
IF @UserTask = 'QA'
BEGIN
EXEC ExecCodingSessionDetails
    @DocumentID, @CodingSessionDetails, @IsReturn = @IsExecCodingSessionDetails OUTPUT;
-- Retrieve total document count
SELECT @TotalDocCount = COUNT(Id) FROM ImportPages;
-- CONCAT avoids both implicit INT-to-string conversion errors and NULL propagation
SET @Description = CONCAT('Total Document - ', @TotalDocCount);
-- Retrieve QA completed document count
SELECT @QaCompletedCount = COUNT(Id) FROM ImportPages WHERE Coded = 1 AND Revision = 1;
SET @Description = CONCAT(@Description, ' / Total QA Doc completed - ', @QaCompletedCount);
END
ELSE
BEGIN
-- Retrieve total document count and code completed document count
SELECT @TotalDocCount = COUNT(Id),
       @CodeCompletedCount = SUM(CASE WHEN Coded = 1 THEN 1 ELSE 0 END)
FROM ImportPages;
SET @Description = CONCAT(@Description, ' / Total Coded Doc completed - ', @CodeCompletedCount);
END
DECLARE @LstDocumentId NVARCHAR(255);
DECLARE @LstId INT;
-- Get the document by its ID
SELECT @LstDocumentId = Document_ID, @LstId = Id
FROM ImportPages
WHERE Id = @DocOrgId;
SET @Description = CONCAT(@Description, ' / 1-Last Doc ID - ', @LstDocumentId,
    ', Last Id - ', @LstId);
IF @LstDocumentId IS NULL
BEGIN
-- If the document does not exist, check whether this was the last document
SELECT @IsReturnFlag =
    CASE
        WHEN @UserTask = 'Coder' AND @TotalDocCount = @CodeCompletedCount THEN 1
        WHEN @UserTask = 'QA' AND @TotalDocCount = @QaCompletedCount THEN 1
        ELSE 0
    END;
SET @Description = CONCAT(@Description, ' / Is Return Flag - ',
    CAST(@IsReturnFlag AS NVARCHAR(10)));
END
ELSE
BEGIN
IF @isHistoryDocument = 0
BEGIN
/* Get next available document */
DECLARE @IsAssignedSameDoc BIT;
-- Check if the document is assigned
SELECT @IsAssignedSameDoc = CASE WHEN EXISTS (SELECT 1 FROM CheckDocuments
    WHERE Document_ID = @DocumentId) THEN 1 ELSE 0 END;
IF @IsAssignedSameDoc = 0
BEGIN
-- If not assigned, set the next document ID and Id to the current document
SET @NextDocumentId = @LstDocumentId;
SET @NextId = @LstId;
SET @Description = CONCAT(@Description, ' / IsAssignedSameDoc - ',
    CAST(@IsAssignedSameDoc AS NVARCHAR(10)),
    ', NextDocumentId ', @NextDocumentId, ', NextId ', @NextId);
END
ELSE
BEGIN
-- If assigned, find the next available document
DECLARE @AvailableDocumentId NVARCHAR(100);
DECLARE @AvailableId INT;
-- Get the list of documents assigned to the same task
WITH AssignedDocs AS (
    SELECT Document_ID
    FROM CheckDocuments
    WHERE UserTask = @UserTask
)
SELECT TOP 1 @AvailableDocumentId = Document_ID, @AvailableId = Id
FROM ImportPages
WHERE Document_ID NOT IN (SELECT Document_ID FROM AssignedDocs)
ORDER BY Document_ID;
IF @AvailableDocumentId IS NOT NULL
BEGIN
-- If an available document is found, use it as the next document
SET @NextDocumentId = @AvailableDocumentId;
SET @NextId = @AvailableId;
SET @Description = CONCAT(@Description, ' / @AvailableDocumentId - ', @AvailableDocumentId,
    ', NextDocumentId ', @NextDocumentId, ', NextId ', @NextId);
END
ELSE
BEGIN
-- Otherwise fall back to the current document
SET @NextDocumentId = @LstDocumentId;
SET @NextId = @LstId;
SET @Description = CONCAT(@Description, ' / ELSE - NextDocumentId ', @NextDocumentId,
    ', NextId ', @NextId);
END
END
/* Remove document from the CheckDocuments table */
-- Check if the document exists in the CheckDocuments table
IF EXISTS (
    SELECT 1
    FROM CheckDocuments
    WHERE Document_ID = @DocumentID
      AND ProjectId = @ProjectId
      AND UserTask = @UserTask
      AND DocumentStatus = 0
)
BEGIN
-- Remove the document from the CheckDocuments table
DELETE FROM CheckDocuments
WHERE Document_ID = @DocumentID
  AND ProjectId = @ProjectId
  AND UserTask = @UserTask
  AND DocumentStatus = 0;
SET @Description = CONCAT(@Description, ' / Removed Document from check table');
-- Output informational message
PRINT 'Removed Document from check table: ' + @DocumentID;
END
END
ELSE
BEGIN
SET @NextDocumentId = @LstDocumentId;
SET @NextId = @LstId;
SET @Description = CONCAT(@Description, ' / Outer ELSE - NextDocumentId ', @NextDocumentId,
    ', NextId ', @NextId);
END
END
PRINT '@NextId - ' + CAST(@NextId AS VARCHAR(50));
SET @Description = CONCAT(@Description, ' / Next button success');
EXEC [dbo].[InsertSaveDocAndpartiesLogs] @DocumentID, @Description, 0;
SET @ReturnJSONResult =
    CASE
        WHEN @IsReturnFlag = 1 THEN N'{"Response": "LastDocument", "Message": "All the documents are completed."}'
        WHEN @NextId > 0 THEN N'{"Response": "success", "nextDocumentId": "' + @NextDocumentId
            + '", "nextId": ' + CAST(@NextId AS NVARCHAR(10))
            + ', "Message": "Document updated successfully"}'
        ELSE N'{"Response": "No documents to code", "Statuscode": 404}'
    END;
-- Output informational message
PRINT 'JsonData: ' + @ReturnJSONResult;
SELECT @ReturnJSONResult AS JsonResponse;
COMMIT TRANSACTION; PRINT 'COMMIT';
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
    IF @IsAddPartiesAndPartiesHistory = 0 OR @IsAddCodedData = 0 OR @IsUpdateImportPages = 0
       OR @IsUpdateAdminRegeTitle = 0 OR @IsUpdateExportExtras = 0
       OR @IsInsUpdFieldAdminValidations = 0 OR @IsExecCodingSessionDetails = 0
        ROLLBACK TRANSACTION MySavepoint; -- Roll back to the savepoint
    ELSE
        ROLLBACK TRANSACTION; -- Roll back the entire transaction
END
-- Get error details (ERROR_NUMBER, not ERROR_STATE, identifies a 1205 deadlock)
DECLARE @ErrorNumber INT;
SELECT
    @ErrorMessage = ERROR_MESSAGE(),
    @ErrorSeverity = ERROR_SEVERITY(),
    @ErrorState = ERROR_STATE(),
    @ErrorNumber = ERROR_NUMBER();
SET @Description = CONCAT(@Description, ' / ', @ErrorMessage);
EXEC [dbo].[InsertSaveDocAndpartiesLogs] @DocumentID, @Description, 0;
-- Set the JSON response based on the error
IF @ErrorMessage = 'Sequence contains more than one element'
BEGIN
    SET @ReturnJSONResult = N'{"Response": "Document Saving fail", "Statuscode": 500}';
END
ELSE
BEGIN
    IF @ErrorNumber = 1205
    BEGIN
        SET @ReturnJSONResult = N'{"Response": "1205 - Deadlock Detected", "Statuscode": 500}';
    END
    ELSE
    BEGIN
        -- CONCAT keeps the result non-NULL even when a flag variable was never assigned
        SET @ReturnJSONResult = CONCAT(N'{"Response": "', @ErrorMessage,
            '", "AddPartiesAndPartiesHistory": ', CAST(@IsAddPartiesAndPartiesHistory AS NVARCHAR(10)),
            ', "AddCodedData": ', CAST(@IsAddCodedData AS NVARCHAR(10)),
            ', "UpdateImportPages": ', CAST(@IsUpdateImportPages AS NVARCHAR(10)),
            ', "UpdateAdminRegeTitle": ', CAST(@IsUpdateAdminRegeTitle AS NVARCHAR(10)),
            ', "UpdateExportExtras": ', CAST(@IsUpdateExportExtras AS NVARCHAR(10)),
            ', "InsUpdFieldAdminValidations": ', CAST(@IsInsUpdFieldAdminValidations AS NVARCHAR(10)),
            ', "ExecCodingSessionDetails": ', CAST(@IsExecCodingSessionDetails AS NVARCHAR(10)),
            ', "Statuscode": 500}');
    END;
END
SELECT @ReturnJSONResult AS JsonResponse;
RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;
END;
END;
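One frequent source of a NULL `@Description` or `@ReturnJSONResult` in code like the above is string building with `+`: in T-SQL, `'x' + @IntVar` fails with a conversion error, and `'x' + NULL` yields NULL for the whole expression, so a single unassigned variable can silently wipe out the result. A short sketch of the difference between `+` and `CONCAT`:

```sql
-- Sketch: '+' vs CONCAT with INT and NULL operands.
DECLARE @Count INT = 5, @Missing NVARCHAR(10);    -- @Missing stays NULL
-- SELECT 'Total - ' + @Count;                    -- error: cannot convert 'Total - ' to int
SELECT 'Total - ' + @Missing;                     -- NULL: the whole string is lost
SELECT CONCAT('Total - ', @Count, @Missing);      -- 'Total - 5': ints converted, NULLs treated as ''
```

A NULL produced this way flows straight into the `SELECT ... AS JsonResponse` result set, which is exactly the shape of value a .NET data reader cannot materialize into a non-nullable string.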
The rest are the nested procedures:
ALTER PROCEDURE [dbo].[AddCodedData]
(
@DocumentID VARCHAR(50) = NULL,
@ProjectId VARCHAR(50) = NULL,
@DocumentDate NVARCHAR(255) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@Estimated NVARCHAR(255) = NULL,
@Title NVARCHAR(4000) = NULL,
@CodingQATime INT = NULL,
@IsCorrected INT = NULL,
@UserTask VARCHAR(10) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
DECLARE @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn=0;
DECLARE @TempCodedDatas AS TABLE (
[Document_ID] NVARCHAR(255), [Image_File_Name] NVARCHAR(500),
[page_label] NVARCHAR(255), [page_num] INT, [num_pages] INT,
[Coded] INT, [Revision] INT, [DocType] NVARCHAR(255),
[EnteredBy] INT, [HostDocId] NVARCHAR(255),
[ExportDate] DATETIME, [ImportDate] DATETIME, [Percentage]
INT, [SetId] INT, [DateCreated] DATETIME,
LastModified DATETIME, Host_Reference NVARCHAR(255),
[Document_Date] DATETIME, [Estimated] NVARCHAR(255),
[Document_Type] NVARCHAR(255),Title NVARCHAR(4000),
Document_DateValue NVARCHAR(255),
[CodingDate] DATETIME, [CodingTime] INT,[QADate] DATETIME,
[QATime] INT, [IsCorrected] INT
);
INSERT INTO @TempCodedDatas ([Document_ID], [Image_File_Name], [page_label], [page_num],
    [num_pages], [Coded], [Revision], [DocType], [EnteredBy], [HostDocId],
    [ExportDate], [ImportDate], [Percentage], [SetId], [DateCreated], [LastModified],
    [Host_Reference], [Document_Date], [Estimated], [Document_Type],
    [Title], [Document_DateValue], [CodingDate], [QADate], [CodingTime], [QATime], [IsCorrected])
SELECT IPS.[Document_ID], IPS.[Image_File_Name], IPS.[page_label], IPS.[page_num], IPS.[num_pages],
    1 AS [Coded],
    (CASE WHEN @UserTask = 'Coder' THEN 0 ELSE 1 END) AS [Revision],
    @DocumentType AS [DocType],
    @EnteredById AS [EnteredBy],
    IPS.[HostDocId], IPS.[ExportDate], IPS.[ImportDate], IPS.[Percentage], IPS.[SetId],
    IPS.[DateCreated], GETDATE() AS [LastModified],
    ICD.[Host_Reference],
    CASE
        WHEN ISNULL(@DocumentDate, '') = '' THEN NULL
        ELSE CONVERT(DATETIME, @DocumentDate)
    END AS [Document_Date],
    @Estimated AS [Estimated],
    @DocumentType AS [Document_Type],
    CASE
        WHEN LEN(@Title) > 0 THEN @Title
        ELSE 'Untitled'
    END AS [Title],
    CASE
        WHEN ISNULL(ICD.Document_DateValue, '') = '' THEN NULL
        ELSE CONVERT(DATETIME, ICD.Document_DateValue)
    END AS [Document_DateValue],
    (CASE WHEN @UserTask = 'Coder' THEN GETDATE() ELSE NULL END) AS [CodingDate],
    (CASE WHEN @UserTask = 'Coder' THEN NULL ELSE GETDATE() END) AS [QADate],
    (CASE WHEN @UserTask = 'Coder' THEN @CodingQATime ELSE 0 END) AS [CodingTime],
    (CASE WHEN @UserTask = 'Coder' THEN 0 ELSE @CodingQATime END) AS [QATime],
    @IsCorrected AS [IsCorrected]
FROM ImportPages IPS
INNER JOIN ImportCodedDatas ICD ON ICD.Document_ID = IPS.Document_ID
WHERE IPS.Document_ID = @DocumentID;
IF @UserTask = 'Coder'
BEGIN
INSERT INTO CodedDatas ([Document_ID], [Image_File_Name], [page_label], [page_num], [num_pages],
    [Coded], [Revision], [DocType], [EnteredBy], [HostDocId], [ExportDate], [ImportDate],
    [Percentage], [SetId], [DateCreated], [LastModified], [main_id], [End_Page], [No_Pages],
    [Host_Reference], [Document_Date], [Estimated], [Document_Type], [Title],
    [Document_DateValue], [CodingDate], [QADate], [CodingTime], [QATime], [IsCorrected],
    [CodingStatus], [QAStatus])
SELECT [Document_ID], [Image_File_Name], [page_label], [page_num], [num_pages], [Coded],
    [Revision], [DocType], [EnteredBy], [HostDocId], [ExportDate], [ImportDate], [Percentage],
    [SetId], [DateCreated], [LastModified], NULL, NULL, 0, [Host_Reference],
    [Document_Date], [Estimated], [DocType], [Title], [Document_DateValue], [CodingDate],
    [QADate], [CodingTime], [QATime], [IsCorrected], NULL, NULL
FROM @TempCodedDatas;
SET @Description = 'Coded Data Saved';
END
ELSE
BEGIN
INSERT INTO CodedDatas ([Document_ID], [Image_File_Name], [page_label], [page_num], [num_pages],
    [Coded], [Revision], [DocType], [EnteredBy], [HostDocId], [ExportDate], [ImportDate],
    [Percentage], [SetId], [DateCreated], [LastModified], [main_id], [End_Page], [No_Pages],
    [Host_Reference], [Document_Date], [Estimated], [Document_Type], [Title],
    [Document_DateValue], [CodingDate], [QADate], [CodingTime], [QATime], [IsCorrected],
    [CodingStatus], [QAStatus])
SELECT [Document_ID], [Image_File_Name], [page_label], [page_num], [num_pages], [Coded],
    [Revision], [DocType], [EnteredBy], [HostDocId], [ExportDate], [ImportDate], [Percentage],
    [SetId], [DateCreated], [LastModified], NULL, NULL, 0, [Host_Reference],
    [Document_Date], [Estimated], [DocType], [Title], [Document_DateValue], [CodingDate],
    [QADate], [CodingTime], [QATime], [IsCorrected], NULL, NULL
FROM @TempCodedDatas;
SET @Description = 'Review data Saved';
END;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,1;
COMMIT TRANSACTION;
SET @IsReturn=1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
ALTER PROCEDURE [dbo].[UpdateAdminRegeTitle]
(
@DocumentID VARCHAR(50) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@Title NVARCHAR(4000) = NULL,
@DocumentDate NVARCHAR(255) = NULL,
@Estimated NVARCHAR(255) = NULL,
@UserTask VARCHAR(10) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
DECLARE @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn=0;
/* Update Admin Regex */
DECLARE @AdminRegexCount INT;
DECLARE @CodedDataCount INT;
-- Get the count of enabled admin regexes
SELECT @AdminRegexCount = COUNT(Id)
FROM AdminRegexs
WHERE Enabled = 1;
-- Get the count of coded data rows for the document
SELECT @CodedDataCount = COUNT(Id)
FROM CodedDatas --WITH (UPDLOCK, SERIALIZABLE)
WHERE Document_ID = @DocumentID;
-- Fetch the latest coded data
DECLARE @LatestCodedData TABLE (
    Id INT,
    Title NVARCHAR(MAX)
);
INSERT INTO @LatestCodedData (Id, Title)
SELECT TOP 1 Id, Title
FROM CodedDatas --WITH (UPDLOCK, SERIALIZABLE)
WHERE Document_ID = @DocumentID
ORDER BY LastModified DESC;
DECLARE @CodedDataTitle NVARCHAR(255);
SET @CodedDataTitle = (SELECT Title FROM @LatestCodedData);
-- Apply common updates conditionally
IF @CodedDataTitle != 'Untitled'
BEGIN
    UPDATE ImportCodedDatas
    SET Document_Date = @DocumentDate,
        Estimated = @Estimated,
        Document_Type = @DocumentType
    WHERE Document_ID = @DocumentID;
END;
DECLARE @TitleString NVARCHAR(MAX);
DECLARE @Replacement NVARCHAR(MAX);
-- Apply title regex replacements
IF @AdminRegexCount > 0 AND @CodedDataCount > 0
BEGIN
    -- Loop through admin regexes
    DECLARE @Index INT = 1;
    WHILE @Index <= @AdminRegexCount
    BEGIN
        -- Get the current admin regex and replacement
        SELECT @TitleString = Matchexpression,
               @Replacement = Replacement
        FROM (
            SELECT Matchexpression, Replacement,
                   ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RowNum
            FROM AdminRegexs
        ) AS AdminRegex
        WHERE RowNum = @Index;
        -- Update titles based on the admin regex
        IF @TitleString = '(Proprietary Limited)+/g'
           AND NOT EXISTS (SELECT 1 FROM @LatestCodedData
                           WHERE Title LIKE '%(Proprietary Limited)+/g%')
        BEGIN
            SET @TitleString = REPLACE(REPLACE(REPLACE(@TitleString, '(', ''), ')', ''), '+/g', '');
        END
        ELSE
        BEGIN
            SET @TitleString = REPLACE(REPLACE(REPLACE(REPLACE(@TitleString, '[', ''), ']', ''), '+/g', ''), '\', '');
            SET @Title = REPLACE(
                CASE WHEN @TitleString LIKE '%[,.;:()!?]+/g%'
                     THEN REPLACE(CAST(REPLACE(@Title, @TitleString, @Replacement) AS NVARCHAR(MAX)),
                                  '[' + @TitleString + ']', @Replacement)
                     ELSE @Title
                END, @TitleString, @Replacement);
        END
        SET @Title = REPLACE(
            CASE
                WHEN CHARINDEX(@TitleString, @CodedDataTitle) > 0 THEN
                    CASE
                        WHEN @CodedDataTitle COLLATE SQL_Latin1_General_CP1_CI_AI
                             LIKE '%' + @TitleString + '%'
                            THEN REPLACE(@CodedDataTitle, @TitleString, @Replacement)
                        ELSE @CodedDataTitle
                    END
                ELSE @Title
            END, @TitleString, @Replacement);
        -- Update titles in the ImportCodedDatas table, scoped to the current document
        UPDATE ImportCodedDatas
        SET Title = @Title
        WHERE Document_ID = @DocumentID;
        -- Update titles in the CodedDatas table, scoped to the current document
        UPDATE CodedDatas
        SET Title = @Title
        WHERE Document_ID = @DocumentID;
        -- Increment index
        SET @Index = @Index + 1;
    END;
END
ELSE
BEGIN
    DECLARE @NewTitle NVARCHAR(MAX);
    SET @NewTitle = REPLACE(LTRIM(RTRIM(@Title)), ' ', ''); -- Strip spaces to test for an empty title
    UPDATE ImportCodedDatas
    SET Title = CASE
                    WHEN LEN(@NewTitle) > 0 THEN @Title
                    ELSE 'Untitled'
                END
    WHERE Document_ID = @DocumentID;
END;
COMMIT TRANSACTION;
SET @Description = 'Update Regex Title';
EXEC [dbo].[InsertSaveDocAndpartiesLogs] @DocumentID, @Description, 1;
SET @IsReturn = 1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
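A side note on the loop above: `ROW_NUMBER() OVER (ORDER BY (SELECT NULL))` assigns row numbers in an undefined order, so successive iterations may not visit the regexes consistently. A sketch of a deterministic alternative, assuming `AdminRegexs.Id` is the ascending key (it is already used in `COUNT(Id)` above):

```sql
-- Sketch: iterate enabled regex rows deterministically by Id instead of
-- relying on ROW_NUMBER() over an arbitrary order.
DECLARE @RegexId INT = 0, @Match NVARCHAR(MAX), @Repl NVARCHAR(MAX);
WHILE 1 = 1
BEGIN
    SELECT TOP 1 @RegexId = Id, @Match = Matchexpression, @Repl = Replacement
    FROM AdminRegexs
    WHERE Enabled = 1 AND Id > @RegexId
    ORDER BY Id;
    IF @@ROWCOUNT = 0 BREAK;   -- no more enabled rows
    -- ... apply @Match / @Repl to the title here ...
END;
```

This also avoids the mismatch in the original, where the count is taken over `Enabled = 1` rows but the numbered subquery ranks all rows.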
ALTER PROCEDURE [dbo].[UpdateExportExtras]
(
@DocumentID VARCHAR(50) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
DECLARE @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn = 0;
DECLARE @MatchExpr NVARCHAR(MAX);
DECLARE @Replacement NVARCHAR(MAX);
DECLARE @UpdatedCount INT;
-- Get an enabled AdminRegex (if several rows are enabled, this assignment keeps
-- only one arbitrary row)
SELECT
    @MatchExpr = Matchexpression,
    @Replacement = Replacement
FROM AdminRegexs
WHERE Enabled = 1;
-- Update Export_extras for the AdminRegex
IF @MatchExpr IS NOT NULL AND @Replacement IS NOT NULL
BEGIN
    -- Update the theValue column
    UPDATE Export_extras
    SET theValue = REPLACE(theValue, @MatchExpr, @Replacement)
    WHERE Document_ID = @DocumentID AND theValue IS NOT NULL;
    SET @UpdatedCount = @@ROWCOUNT;
    PRINT CONCAT('Updated ', @UpdatedCount, ' Export_extras (theValue) for AdminRegex: ',
        @MatchExpr, '. Document ID: ', @DocumentID);
    -- Update the memoValue column
    UPDATE Export_extras
    SET memoValue = REPLACE(memoValue, @MatchExpr, @Replacement)
    WHERE Document_ID = @DocumentID AND memoValue IS NOT NULL;
    SET @UpdatedCount = @@ROWCOUNT;
    SET @Description = CONCAT('Updated ', @UpdatedCount,
        ' Export_extras (memoValue) for AdminRegex: ', @MatchExpr);
    PRINT CONCAT('Updated ', @UpdatedCount, ' Export_extras (memoValue) for AdminRegex: ',
        @MatchExpr, '. Document ID: ', @DocumentID);
    -- Update the textValue column
    UPDATE Export_extras
    SET textValue = REPLACE(textValue, @MatchExpr, @Replacement)
    WHERE Document_ID = @DocumentID AND textValue IS NOT NULL;
    SET @UpdatedCount = @@ROWCOUNT;
    SET @Description = CONCAT('Updated ', @UpdatedCount,
        ' Export_extras (textValue) for AdminRegex: ', @MatchExpr);
    PRINT CONCAT('Updated ', @UpdatedCount, ' Export_extras (textValue) for AdminRegex: ',
        @MatchExpr, '. Document ID: ', @DocumentID);
END;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,1;
COMMIT TRANSACTION;
SET @IsReturn=1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
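Worth flagging in `UpdateExportExtras`: a scalar assignment of the form `SELECT @var = col FROM t WHERE ...` does not fail when the filter matches several rows; it silently keeps the value from one arbitrary row. A minimal sketch of the effect:

```sql
-- Sketch: with several enabled rows, a scalar assignment keeps only one of them.
DECLARE @MatchExpr NVARCHAR(MAX);
SELECT @MatchExpr = Matchexpression
FROM AdminRegexs
WHERE Enabled = 1;
-- @MatchExpr now holds the value from a single, arbitrary qualifying row;
-- the remaining enabled regexes are never applied in this call.
```

If the intent is to apply every enabled regex, the assignment needs to be replaced by a loop or cursor over the qualifying rows.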
ALTER PROCEDURE [dbo].[UpdateImportPages]
(
@DocumentID VARCHAR(50) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@UserTask VARCHAR(10) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
DECLARE @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn = 0;
/* Updating ImportPages */
IF @UserTask = 'Coder'
BEGIN
    UPDATE ImportPages
    SET Coded = 1,
        EnteredBy = @EnteredById,
        DocType = @DocumentType,
        LastModified = GETDATE()
    WHERE Document_ID = @DocumentID;
END
ELSE
BEGIN
    UPDATE ImportPages
    SET Revision = 1,
        EnteredBy = @EnteredById,
        LastModified = GETDATE()
    WHERE Document_ID = @DocumentID;
END;
SET @Description = 'Updated Import Pages';
EXEC [dbo].[InsertSaveDocAndpartiesLogs] @DocumentID, @Description, 1;
COMMIT TRANSACTION;
SET @IsReturn=1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
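One structural point that can explain intermittent failures under concurrent load: every nested procedure opens its own `BEGIN TRANSACTION` inside the caller's transaction. In SQL Server that only increments `@@TRANCOUNT`, and a bare `ROLLBACK TRANSACTION` in a nested CATCH rolls back *everything*, after which the caller's `COMMIT` fails with a transaction-count mismatch and no `JsonResponse` row may reach the client. A common pattern (a sketch under my own naming, `InnerProcSketch` is hypothetical, not the author's code) is to open a real transaction only at the outermost level and use a savepoint when already inside one:

```sql
-- Sketch of a nested-procedure transaction pattern: only the outermost caller
-- owns a real transaction; nested calls protect themselves with a savepoint.
CREATE OR ALTER PROCEDURE dbo.InnerProcSketch @IsReturn BIT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @StartedTran BIT = 0;
    BEGIN TRY
        IF @@TRANCOUNT = 0
        BEGIN
            BEGIN TRANSACTION;                     -- outermost: real transaction
            SET @StartedTran = 1;
        END
        ELSE
            SAVE TRANSACTION InnerProcSketch;      -- nested: savepoint only
        -- ... do the procedure's work here ...
        IF @StartedTran = 1 COMMIT TRANSACTION;
        SET @IsReturn = 1;
    END TRY
    BEGIN CATCH
        IF @StartedTran = 1 AND @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;                  -- undo everything we own
        ELSE IF XACT_STATE() = 1
            ROLLBACK TRANSACTION InnerProcSketch;  -- undo only our own work
        SET @IsReturn = 0;
    END CATCH;
END;
```

The `XACT_STATE()` check matters because a doomed transaction (state -1, e.g. after a deadlock) cannot be rolled back to a savepoint and must be rolled back entirely by the caller.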
In my application multiple users request the API and in the API I am calling the stored Procedure via Entity Framework but some times getting exception Data Is Null. The Method or Property cannot be call on null values. This is not consistent if 4-5 users are clicking on submit button to call api then api executing stored procedure then for some users exception coming. This is my main procedure ALTER PROCEDURE [dbo].[sp_SaveDcumentAndParties](
@Action VARCHAR(20),
@partiesHistoryJson NVARCHAR(MAX)=NULL,
@exportExtraHistoryJson NVARCHAR(MAX)=NULL,
@FieldSettingsJson NVARCHAR(MAX)=NULL,
@CodingSessionDetails NVARCHAR(MAX)=NULL,
@DocumentID VARCHAR(50) = NULL,
@ProjectId VARCHAR(50) = NULL,
@DocumentDate NVARCHAR(255) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@Estimated NVARCHAR(255) = NULL,
@Title NVARCHAR(4000) = NULL,
@CodingQATime INT = NULL,
@IsCorrected INT = NULL,
@UserTask VARCHAR(10) = NULL,
@isHistoryDocument Bit,
@ReturnJSONResult NVARCHAR(MAX) =NULL
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
Declare @Description NVARCHAR(MAX);
IF @Action = ‘Save’
BEGIN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
BEGIN TRY
— Deadlock avoidance mechanism
SET DEADLOCK_PRIORITY LOW;
DECLARE @IsAddPartiesAndPartiesHistory BIT;
DECLARE @IsAddCodedData BIT;
DECLARE @IsUpdateImportPages BIT;
DECLARE @IsUpdateAdminRegeTitle BIT;
DECLARE @IsUpdateExportExtras BIT;
DECLARE @IsInsUpdFieldAdminValidations BIT;
DECLARE @IsExecCodingSessionDetails BIT;
SAVE TRANSACTION MySavepoint; — Savepoint before inner procedure call
EXEC AddPartiesAndPartiesHistory @partiesHistoryJson,@exportExtraHistoryJson,
@DocumentID,@IsReturn=@IsAddPartiesAndPartiesHistory OUTPUT;
EXEC AddCodedData @DocumentID,@ProjectId, @DocumentDate,@DocumentType,@EnteredById,
@Estimated,@Title,@CodingQATime,@IsCorrected,@UserTask,@IsReturn=@IsAddCodedData
OUTPUT;
EXEC UpdateImportPages @DocumentID,@DocumentType,
@EnteredById,@UserTask,@IsReturn=@IsUpdateImportPages OUTPUT;
EXEC UpdateAdminRegeTitle @DocumentID,@DocumentType,
@EnteredById,@Title,@DocumentDate,@Estimated,@UserTask,@IsReturn=@IsUpdateAdminRegeTitle
OUTPUT;
If @UserTask=’Coder’
Begin
EXEC UpdateExportExtras @DocumentID,@IsReturn=@IsUpdateExportExtras OUTPUT;
EXEC InsUpdFieldAdminValidations
@DocumentID,@FieldSettingsJson,@IsReturn=@IsInsUpdFieldAdminValidations OUTPUT;
END;
DECLARE @DocOrgID INT = (SELECT Id + 1 FROM ImportPages Where
Document_ID=@DocumentID);
DECLARE @NextDocumentId NVARCHAR(100);
DECLARE @NextId INT = 0;
DECLARE @IsReturnFlag BIT = 0;
DECLARE @CodeCompletedCount INT = 0;
DECLARE @QaCompletedCount INT = 0;
DECLARE @TotalDocCount INT = 0;
If @UserTask=’QA’
BEGIN
EXEC ExecCodingSessionDetails
@DocumentID,@CodingSessionDetails,@IsReturn=@IsExecCodingSessionDetails OUTPUT;
— Retrieve total document count
SELECT @TotalDocCount = COUNT(Id) FROM ImportPages;
SET @Description=’Total Document – ‘+@TotalDocCount;
— Retrieve QA completed document count
SELECT @QaCompletedCount = COUNT(Id) FROM ImportPages WHERE Coded = 1 AND
Revision = 1;
SET @Description=CONCAT(@Description,’ / Total QA Doc ompleted –
‘+@QaCompletedCount);
END
ELSE
BEGIN
— Retrieve total document count and code completed document count
SELECT @TotalDocCount = COUNT(Id), @CodeCompletedCount = SUM(CASE WHEN Coded = 1
THEN 1 ELSE 0 END)
FROM ImportPages;
SET @Description=CONCAT(@Description,’ / Total Coded Doc ompleted –
‘+@CodeCompletedCount);
END
DECLARE @LstDocumentId NVARCHAR(255);
DECLARE @LstId INT;
— Get the document by its ID
SELECT @LstDocumentId = Document_ID,@LstId=Id
FROM ImportPages
WHERE Id = @DocOrgId;
SET @Description=CONCAT(@Description,’ / 1-Last Doc ID – ‘+@LstDocumentId+’, ‘+’Last
Id – ‘+@LstId);
IF @LstDocumentId IS NULL
BEGIN
— If document does not exist, check for the last document
SELECT @IsReturnFlag =
CASE
WHEN @UserTask = ‘Coder’ AND @TotalDocCount = @CodeCompletedCount THEN 1
WHEN @UserTask = ‘QA’ AND @TotalDocCount = @QaCompletedCount THEN 1
ELSE 0
END;
SET @Description=CONCAT(@Description,’ / Is Return Flag – ‘+CAST(@IsReturnFlag AS
nvarchar(10)));
END
ELSE
BEGIN
IF @isHistoryDocument=0
BEGIN
/*Get Next Available Document*/
DECLARE @IsAssignedSameDoc BIT;
— Check if the document is assigned
SELECT @IsAssignedSameDoc = CASE WHEN EXISTS (SELECT 1 FROM CheckDocuments
WHERE Document_ID = @DocumentId) THEN 1 ELSE 0 END;
IF @IsAssignedSameDoc = 0
BEGIN
— If not assigned, set next document ID and ID to the current document
SET @NextDocumentId = @LstDocumentId;
SET @NextId = @LstId;
SET @Description=CONCAT(@Description,’ / IsAssignedSameDoc – ‘+
Cast(@IsAssignedSameDoc AS NVARCHAR(10))+’, NextDocumentId’+@NextDocumentId+’,
NextId’+@NextId);
END
ELSE
BEGIN
— If assigned, find the next available document
DECLARE @AvailableDocumentId NVARCHAR(100);
DECLARE @AvailableId INT;
— Get the list of documents assigned to the same task
WITH AssignedDocs AS (
SELECT Document_ID
FROM CheckDocuments
WHERE UserTask = @UserTask
)
SELECT TOP 1 @AvailableDocumentId = Document_ID,@AvailableId=Id
FROM ImportPages
WHERE Document_ID NOT IN (SELECT Document_ID FROM AssignedDocs)
ORDER BY Document_ID;
IF @AvailableDocumentId IS NOT NULL
BEGIN
— If available document found, set its ID as next document ID
SET @NextDocumentId = @AvailableDocumentId;
SET @NextId = @AvailableId;
SET @Description=CONCAT(@Description,’ / @AvailableDocumentId –
‘+ @AvailableDocumentId+’, NextDocumentId’+@NextDocumentId+’, NextId’+@NextId);
END
ELSE
BEGIN
— If not assigned, set next document ID and ID to the current
document
SET @NextDocumentId = @LstDocumentId;
SET @NextId = @LstId;
SET @Description=CONCAT(@Description,’ / ELSE –
NextDocumentId’+@NextDocumentId+’, NextId’+@NextId);
END
END
/*Remove Document from CheckDocument Table*/
— Check if the document exists in CheckDocuments table
IF EXISTS (
SELECT 1
FROM CheckDocuments
WHERE Document_ID = @DocumentID
AND ProjectId = @ProjectId
AND UserTask = @UserTask
AND DocumentStatus = 0
)
BEGIN
— Remove the document from CheckDocuments table
DELETE FROM CheckDocuments
WHERE Document_ID = @DocumentID
AND ProjectId = @ProjectId
AND UserTask = @UserTask
AND DocumentStatus = 0;
SET @Description=CONCAT(@Description,’Removed Document from check
table’);
— Output informational message
PRINT ‘Removed Document from check table: ‘ + @DocumentID;
— Output informational message
PRINT ‘Check Table Data Removed: ‘ + @DocumentID;
END
END
ELSE
BEGIN
SET @NextDocumentId = @LstDocumentId;
SET @NextId = @LstId;
SET @Description=CONCAT(@Description,’ / Outer ELSE –
NextDocumentId’+@NextDocumentId+’, NextId’+@NextId);
END
END
PRINT ‘@nextId – ‘+ Cast(@nextId As VARCHAR(50));
SET @Description=CONCAT(@Description,’ / Next button success’);
EXEC [dbo].[InsertSaveDocAndpartiesLogs] @DocumentID,@Description,0;
SET @ReturnJSONResult =
CASE
WHEN @isReturnFlag = 1 THEN N'{“Response”: “LastDocument”, “Message”: “All
the documents are completed.”}’
WHEN @nextId > 0 THEN N'{“Response”: “success”, “nextDocumentId”: “‘ +
@nextDocumentId + ‘”, “nextId”: ‘ + CAST(@nextId AS NVARCHAR(10)) + ‘, “Message”: “Document
updated successfully”}’
ELSE N'{“Response”: “No documents to code”, “Statuscode”: 404}’
END;
— Output informational message
PRINT ‘JsonData: ‘ + @ReturnJSONResult;
SELECT @ReturnJSONResult As JsonResponse;
COMMIT TRANSACTION; PRINT ‘COMMIT’;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
IF @IsAddPartiesAndPartiesHistory = 0 OR @IsAddCodedData=0 OR
@IsUpdateImportPages=0
OR @IsUpdateAdminRegeTitle=0 OR @IsUpdateExportExtras=0 OR @
@IsInsUpdFieldAdminValidations=0
OR @IsExecCodingSessionDetails=0
ROLLBACK TRANSACTION MySavepoint; — Rollback to savepoint
ELSE
ROLLBACK TRANSACTION; — Rollback entire transaction
END
— Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=CONCAT(@Description,’ / ‘+@ErrorMessage);
EXEC [dbo].[InsertSaveDocAndpartiesLogs] @DocumentID,@Description,0;
— Set jsonResponse based on error
IF @ErrorMessage = ‘Sequence contains more than one element’
BEGIN
SET @ReturnJSONResult = N'{“Response”: “Document Saving fail”, “Statuscode”:
500}’;
END
ELSE
BEGIN
IF @ErrorState = 1205
BEGIN
SET @ReturnJSONResult = N'{“Response”: “1205 – Deadlock Detected”,
“Statuscode”: 500}’;
END
ELSE BEGIN
SET @ReturnJSONResult = N'{“Response”: “‘ + @ErrorMessage +
‘”,”AddPartiesAndPartiesHistor”:’+CAST(@IsAddPartiesAndPartiesHistory AS NVARCHAR(10))+'”,
“AddCodedData”:’+CAST(@IsAddCodedData AS
NVARCHAR(10))+'”,”UpdateImportPages”:’+CAST(@IsUpdateImportPages AS NVARCHAR(10))+'”,
“UpdateAdminRegeTitle”:’+CAST(@IsUpdateAdminRegeTitle AS
NVARCHAR(10))+'”,”UpdateExportExtras”:’+CAST(@IsUpdateExportExtras AS NVARCHAR(10))+'”,
“InsUpdFieldAdminValidations”:’+CAST(@IsInsUpdFieldAdminValidations AS
NVARCHAR(10))+'”,
“ExecCodingSessionDetails”:’+CAST(@IsExecCodingSessionDetails AS
NVARCHAR(10))+'”,”Statuscode”: 500}’; END;
END
SELECT @ReturnJSONResult As JsonResponse;
RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;
END;
END;Rest are nested procedures ALTER PROCEDURE [dbo].[AddCodedData]
(
@DocumentID VARCHAR(50) = NULL,
@ProjectId VARCHAR(50) = NULL,
@DocumentDate NVARCHAR(255) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@Estimated NVARCHAR(255) = NULL,
@Title NVARCHAR(4000) = NULL,
@CodingQATime INT = NULL,
@IsCorrected INT = NULL,
@UserTask VARCHAR(10) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
Declare @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn=0;
DECLARE @TempCodedDatas AS TABLE (
[Document_ID] NVARCHAR(255), [Image_File_Name] NVARCHAR(500),
[page_label] NVARCHAR(255), [page_num] INT, [num_pages] INT,
[Coded] INT, [Revision] INT, [DocType] NVARCHAR(255),
[EnteredBy] INT, [HostDocId] NVARCHAR(255),
[ExportDate] DATETIME, [ImportDate] DATETIME, [Percentage]
INT, [SetId] INT, [DateCreated] DATETIME,
LastModified DATETIME, Host_Reference NVARCHAR(255),
[Document_Date] DATETIME, [Estimated] NVARCHAR(255),
[Document_Type] NVARCHAR(255),Title NVARCHAR(4000),
Document_DateValue NVARCHAR(255),
[CodingDate] DATETIME, [CodingTime] INT,[QADate] DATETIME,
[QATime] INT, [IsCorrected] INT
);
INSERT inTO @TempCodedDatas ([Document_ID],
[Image_File_Name], [page_label], [page_num], [num_pages], [Coded], [Revision], [DocType],
[EnteredBy], [HostDocId],
[ExportDate], [ImportDate], [Percentage], [SetId],
[DateCreated], [LastModified], [Host_Reference], [Document_Date], [Estimated],
[Document_Type],
[Title], [Document_DateValue], [CodingDate],[QADate],
[CodingTime],[QATime], [IsCorrected])
SELECT IPS.[Document_ID], IPS.[Image_File_Name], IPS.[page_label],
IPS.[page_num], IPS.[num_pages], 1 AS [Coded],
(CASE WHEN @UserTask='Coder' THEN 0 ELSE 1 END) AS [Revision],
@DocumentType AS [DocType], @EnteredById AS [EnteredBy],
IPS.[HostDocId], IPS.[ExportDate], IPS.[ImportDate],
IPS.[Percentage], IPS.[SetId], IPS.[DateCreated], GETDATE() AS [LastModified],
ICD.[Host_Reference],
CASE
WHEN ISNULL(@DocumentDate, '') = '' THEN NULL
ELSE CONVERT(DATETIME, @DocumentDate)
END AS [DocDate],
@Estimated AS [Estimated],
@DocumentType AS [Document_Type],
CASE
WHEN LEN(@Title) > 0 THEN @Title
ELSE 'Untitled'
END AS [Title],
CASE
WHEN ISNULL(ICD.Document_DateValue, '') = '' THEN NULL
ELSE CONVERT(DATETIME, ICD.Document_DateValue)
END AS [Document_DateValue],
(CASE WHEN @UserTask='Coder' THEN GETDATE() ELSE NULL END) AS [CodingDate],
(CASE WHEN @UserTask='Coder' THEN NULL ELSE GETDATE() END) AS [QADate],
(CASE WHEN @UserTask='Coder' THEN @CodingQATime ELSE 0 END) AS [CodingTime],
(CASE WHEN @UserTask='Coder' THEN 0 ELSE @CodingQATime END) AS [QATime],
@IsCorrected AS [IsCorrected]
FROM
ImportPages IPS
INNER JOIN
ImportCodedDatas ICD ON ICD.Document_ID = IPS.Document_ID
WHERE
IPS.Document_ID = @DocumentID;
IF @UserTask = 'Coder'
BEGIN
INSERT INTO CodedDatas ([Document_ID], [Image_File_Name],
[page_label], [page_num], [num_pages], [Coded], [Revision],
[DocType], [EnteredBy], [HostDocId],[ExportDate],
[ImportDate], [Percentage], [SetId], [DateCreated], [LastModified],
[main_id],[End_Page],[No_Pages],[Host_Reference],
[Document_Date], [Estimated], [Document_Type], [Title],
[Document_DateValue], [CodingDate], [QADate],
[CodingTime], [QATime], [IsCorrected],[CodingStatus],[QAStatus])
SELECT [Document_ID], [Image_File_Name], [page_label],
[page_num], [num_pages],[Coded], [Revision], [DocType], [EnteredBy], [HostDocId],
[ExportDate], [ImportDate], [Percentage], [SetId],
[DateCreated], [LastModified],NULL,NULL,0, Host_Reference,
[Document_Date], [Estimated], [DocType],Title,
[Document_DateValue],[CodingDate], [QADate], [CodingTime], [QATime],
[IsCorrected],NULL,NULL
FROM @TempCodedDatas
SET @Description='Coded Data Saved';
END
ELSE
BEGIN
INSERT INTO CodedDatas ([Document_ID], [Image_File_Name],
[page_label], [page_num], [num_pages], [Coded], [Revision],
[DocType], [EnteredBy], [HostDocId],[ExportDate],
[ImportDate], [Percentage], [SetId], [DateCreated], [LastModified],
[main_id],[End_Page],[No_Pages],[Host_Reference],
[Document_Date], [Estimated], [Document_Type], [Title],
[Document_DateValue], [CodingDate], [QADate],
[CodingTime], [QATime], [IsCorrected],[CodingStatus],[QAStatus])
SELECT [Document_ID], [Image_File_Name], [page_label],
[page_num], [num_pages],[Coded], [Revision], [DocType], [EnteredBy], [HostDocId],
[ExportDate], [ImportDate], [Percentage], [SetId],
[DateCreated], [LastModified],NULL,NULL,0, Host_Reference,
[Document_Date], [Estimated], [DocType],Title,
[Document_DateValue],[CodingDate], [QADate], [CodingTime], [QATime],
[IsCorrected],NULL,NULL
FROM @TempCodedDatas
SET @Description='Review data Saved';
END;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,1;
COMMIT TRANSACTION;
SET @IsReturn=1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
ALTER PROCEDURE [dbo].[UpdateAdminRegeTitle]
(
@DocumentID VARCHAR(50) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@Title NVARCHAR(4000) = NULL,
@DocumentDate NVARCHAR(255) = NULL,
@Estimated NVARCHAR(255) = NULL,
@UserTask VARCHAR(10) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
Declare @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn=0;
/*Update Admin Regex*/
DECLARE @AdminRegexCount INT;
DECLARE @CodedDataCount INT;
-- Get the count of enabled admin regexes
SELECT @AdminRegexCount = COUNT(Id)
FROM AdminRegexs
WHERE Enabled = 1;
-- Get the latest coded data for the document
SELECT @CodedDataCount = COUNT(Id)
FROM CodedDatas --WITH (UPDLOCK, SERIALIZABLE)
WHERE Document_ID = @DocumentID;
-- Fetch the latest coded data
DECLARE @LatestCodedData TABLE (
Id INT,
Title NVARCHAR(MAX)
);
INSERT INTO @LatestCodedData (Id, Title)
SELECT TOP 1 Id, Title
FROM CodedDatas --WITH (UPDLOCK, SERIALIZABLE)
WHERE Document_ID = @DocumentID
ORDER BY LastModified DESC;
DECLARE @CodedDataTitle NVARCHAR(255);
SET @CodedDataTitle = (SELECT Title FROM @LatestCodedData);
-- Apply common updates conditionally
IF @CodedDataTitle != 'Untitled'
BEGIN
UPDATE ImportCodedDatas
SET Document_Date = @DocumentDate,
Estimated = @Estimated,
Document_Type = @DocumentType
WHERE Document_ID = @DocumentID;
END;
DECLARE @TitleString NVARCHAR(MAX);
DECLARE @Replacement NVARCHAR(MAX);
-- Apply title regex replacements
IF @AdminRegexCount > 0 AND @CodedDataCount > 0
BEGIN
-- Loop through admin regexes
DECLARE @Index INT = 1;
WHILE @Index <= @AdminRegexCount
BEGIN
-- Get the current admin regex and replacement
SELECT @TitleString = Matchexpression,
@Replacement = Replacement
FROM (
SELECT Matchexpression, Replacement,
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RowNum
FROM AdminRegexs
) AS AdminRegex
WHERE RowNum = @Index;
-- Update titles based on admin regex
IF @TitleString = '(Proprietary Limited)+/g'
AND NOT EXISTS (SELECT 1 FROM @LatestCodedData
WHERE Title LIKE '%(Proprietary Limited)+/g%')
BEGIN
SET @TitleString = REPLACE(REPLACE(REPLACE(@TitleString, '(', ''), ')', ''), '+/g', '');
END
ELSE
BEGIN
SET @TitleString = REPLACE(REPLACE(REPLACE(REPLACE(@TitleString, '[', ''), ']', ''), '+/g', ''), '\', '');
SET @Title = REPLACE(CASE WHEN @TitleString LIKE '%[,.;:()!?]+/g%'
THEN REPLACE(CAST(REPLACE(@Title, @TitleString, @Replacement) AS NVARCHAR(MAX)),
'[' + @TitleString + ']', @Replacement)
ELSE @Title END, @TitleString, @Replacement);
END
SET @Title = REPLACE(
CASE
WHEN CHARINDEX(@TitleString, @CodedDataTitle) > 0 THEN
CASE
WHEN @CodedDataTitle COLLATE SQL_Latin1_General_CP1_CI_AI LIKE '%' + @TitleString + '%' THEN
REPLACE(@CodedDataTitle, @TitleString, @Replacement)
ELSE @CodedDataTitle
END
ELSE @Title
END
, @TitleString, @Replacement);
-- Update titles in ImportCodedDatas table for this document
UPDATE ImportCodedDatas
SET Title = @Title
WHERE Document_ID = @DocumentID;
-- Update titles in CodedDatas table for this document
UPDATE CodedDatas
SET Title = @Title
WHERE Document_ID = @DocumentID;
-- Increment index
SET @Index = @Index + 1;
END;
END
ELSE
BEGIN
DECLARE @NewTitle NVARCHAR(MAX);
-- Remove all spaces so a whitespace-only title is treated as empty
SET @NewTitle = REPLACE(LTRIM(RTRIM(@Title)), ' ', '');
UPDATE ImportCodedDatas
SET Title = CASE
WHEN LEN(@NewTitle) > 0 THEN @Title
ELSE 'Untitled'
END
WHERE Document_ID = @DocumentID;
END;
COMMIT TRANSACTION;
SET @Description='Update Regex Title';
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,1;
SET @IsReturn=1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
ALTER PROCEDURE [dbo].[UpdateExportExtras]
(
@DocumentID VARCHAR(50) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
Declare @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn=0;
DECLARE @MatchExpr NVARCHAR(MAX);
DECLARE @Replacement NVARCHAR(MAX);
DECLARE @UpdatedCount INT;
-- Get an enabled AdminRegex match expression and replacement
SELECT
@MatchExpr = Matchexpression,
@Replacement = Replacement
FROM
AdminRegexs
WHERE
Enabled = 1;
-- Update Export_extras for the AdminRegex
IF @MatchExpr IS NOT NULL AND @Replacement IS NOT NULL
BEGIN
-- Update theValue column
UPDATE Export_extras
SET theValue = REPLACE(theValue, @MatchExpr, @Replacement)
WHERE Document_ID = @DocumentID AND theValue IS NOT NULL;
SET @UpdatedCount = @@ROWCOUNT;
PRINT CONCAT('Updated ', @UpdatedCount, ' Export_extras (theValue) for AdminRegex: ', @MatchExpr, '. Document ID: ', @DocumentID);
-- Update memoValue column
UPDATE Export_extras
SET memoValue = REPLACE(memoValue, @MatchExpr, @Replacement)
WHERE Document_ID = @DocumentID AND memoValue IS NOT NULL;
SET @UpdatedCount = @@ROWCOUNT;
SET @Description = CONCAT('Updated ', @UpdatedCount, ' Export_extras (memoValue) for AdminRegex: ', @MatchExpr);
PRINT CONCAT('Updated ', @UpdatedCount, ' Export_extras (memoValue) for AdminRegex: ', @MatchExpr, '. Document ID: ', @DocumentID);
-- Update textValue column
UPDATE Export_extras
SET textValue = REPLACE(textValue, @MatchExpr, @Replacement)
WHERE Document_ID = @DocumentID AND textValue IS NOT NULL;
SET @UpdatedCount = @@ROWCOUNT;
SET @Description = CONCAT('Updated ', @UpdatedCount, ' Export_extras (textValue) for AdminRegex: ', @MatchExpr);
PRINT CONCAT('Updated ', @UpdatedCount, ' Export_extras (textValue) for AdminRegex: ', @MatchExpr, '. Document ID: ', @DocumentID);
END;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,1;
COMMIT TRANSACTION;
SET @IsReturn=1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
ALTER PROCEDURE [dbo].[UpdateImportPages]
(
@DocumentID VARCHAR(50) = NULL,
@DocumentType NVARCHAR(255) = NULL,
@EnteredById INT = NULL,
@UserTask VARCHAR(10) = NULL,
@IsReturn BIT OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ErrorMessage NVARCHAR(1000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
Declare @Description NVARCHAR(MAX);
BEGIN TRANSACTION;
BEGIN TRY
SET @IsReturn=0;
/*Updating ImportPage*/
IF @UserTask = 'Coder'
BEGIN
UPDATE ImportPages
SET Coded = 1,
EnteredBy = @EnteredById,
DocType = @DocumentType,
LastModified = GETDATE()
WHERE Document_ID = @DocumentID;
END
ELSE
BEGIN
UPDATE ImportPages
SET Revision = 1,
EnteredBy = @EnteredById,
LastModified = GETDATE()
WHERE Document_ID = @DocumentID;
END;
SET @Description='Updated Import Pages';
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,1;
COMMIT TRANSACTION;
SET @IsReturn=1;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Get error details
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SET @Description=@ErrorMessage;
EXEC [dbo].[InsertSaveDocAndpartiesLogs]
@DocumentID,@Description,0;
SET @IsReturn=0;
END CATCH;
END
Business Applications Partner News: Week of April 29
Check out this week’s top resources to stay up to date on the latest Business Applications Partner News. Remember to sign up for the monthly Dynamics 365 and Power Platform partner pulse newsletters.
What to register for:
THIS WEEK!
April 30 (PDT): Dynamics 365 Supply Chain Management – Demand Planning Workshop
May 2: Supply Chain Premium partner webinar
July 22: Microsoft Partner FY25 GTM Launch Event for Business Applications
What to review/like/share:
Case study: Power Platform influencer MNP Digital helps customers achieve low-code success
Leapwork brings testing efficiencies to Dynamics 365 and Power Platform partners: LinkedIn post | direct blog link
What to watch:
April 18 Tech Talk: Power Apps and SQL
Reminders: Register for the upcoming partner events!
Events:
May 9: SMB Partner Hour
May 16-17: Directions Asia
May 21-23: Microsoft Build
May 30: Copilot Studio Partner Opportunity Webinar
Register for upcoming Microsoft AI Training Roadshows
Trainings:
April 1-May 31: Q4 Microsoft Catalyst Partner Training
FY24 High Volume Acceleration Program (Advanced)
June 11 (IST) (BST): Dynamics 365 Supply Chain Management – Demand Planning Workshop
May 14-16 (PDT) | May 15-17 (IST) (BST): SMB Sales Bootcamp
See attached
I am not able to post this; maybe it will post eventually. It keeps asking me to correct highlighted errors, but I can't find any.
Removal of Duplicate Task option in ToDo
I have a number of template tasks which include a number of steps in ToDo which I do frequently but not on a regular schedule eg approving someone’s annual leave. I have a list of templates and when I need a new copy I used to duplicate the template task then move the second copy to where I needed it.
In the latest update Microsoft have removed the Duplicate function so I have to manually type the “duplicate” task in again. This is the same in Outlook app and online versions of Outlook and ToDo.
Why are MS taking core functionality away from this app? Do they ever consult users before making changes like this?
Does anyone have a workaround? Please …..
MS Office Outlook rules
I get a lot of spam. most is 1x-3x and then the email domain ‘evaporates.’
but some just continue on for weeks/months/years.
so I set up a “rule” – if sender’s address include (text); permanently delete.
seems the “Rules” function will only accept five
text1 or
text2 or
…
entries.
so I established a second Rule – to add additional permanent delete spammer url.
problem:
there is a check block “and stop processing more rules” – which I uncheck and it auto rechecks.
I initially thought it meant ‘stop processing more rules on-that-message’
but apparently it means ‘stop processing any more rules of any kind’
so what happens is….
“rule #1” encounters an address to permanently delete in my inbox
and does not process “rule #2” – so I’m left with all the spam-to-be-permanently deleted contained in the five spammer url’s of rule #2
how does one tell MS Office Outlook to ‘keep on chugging’ and process all rules listed?
Partner Spotlight: Revolutionizing Agriculture Through Digital Innovation and Sustainability
As part of the Microsoft #BuildFor2030 Initiative, which aligns with the United Nations Sustainable Development Goals, we are committed to showcasing solutions that drive meaningful societal impact and spotlighting our partners’ growth stories on the marketplace. In this edition of our Partner Spotlight series, we continue highlighting partners at the forefront of app innovation on the commercial marketplace. Throughout the series, we will be telling the unique stories of partners who are leading the way with AI in app development, who are building using multiple Microsoft products, and who are publishing transactable applications on the marketplace. In this article, Microsoft’s @Gena_Goh sat down with AGRIVI’s Matija Zulj to learn more about their story and partner journey.
About Matija: Matija is a founder and CEO at AGRIVI, an AgTech company that delivers digital agriculture solutions across the agrifood value chain, with the vision to change the way food is produced. With a focus on AI, innovation, sustainability, and support for farmers, AGRIVI plays a key role in improving the entire agrifood ecosystem. Matija serves as a European Innovation Council Ambassador, OECD-FAO Advisory Group Member for Responsible Ag Supply Chains, and UN Global Compact Board Member Croatia.
About Gena: Gena Goh is the Partner Advisory & Inclusive Growth Senior Strategy Lead with the Global Partner Solutions business at Microsoft. She focuses on developing strategies that elevate the voice of partner and enable partners to innovate and build solutions toward a more inclusive economy, including leading the Microsoft #BuildFor2030 Initiative. Gena is also a professionally trained co-active coach and enjoys helping people and organizations chart new paths to purpose and impact.
___________________________________________________________________________________________________________________________________
[GG]: Tell us about Agrivi and your mission. What inspired the founding?
[MZ]: AGRIVI was born with a powerful mission – to solve the global food problem by changing the way food is produced. It all began by seeing and witnessing that our current food ecosystem is broken, due to climate change, inefficient supply chains, and unsafe food.
Namely, with the global population projected to reach 10 billion by 2050, urgent changes are needed to ensure a future with healthy, nutritious, and climate-friendly food while securing livelihoods for farmers worldwide. Thus, embracing digital transformation becomes imperative to revolutionize the entire industry.
At AGRIVI, we are dedicated to driving this transformation by focusing on technology, innovation, and sustainability in improving the agrifood sector. Our mission is to empower farmers and agrifood companies with intelligent technology solutions, facilitating economically and environmentally sustainable and traceable food production.
We achieve this by digitalizing farm businesses, enhancing food transparency and safety, promoting regenerative agriculture practices, and offering farmers AI-powered advice and knowledge. We prioritize both digitalizing farming methods and fostering collaboration across the agrifood value chain.
[GG]: Can you tell us a bit about the application(s) you have available on the marketplace? How does it work?
[MZ]: Our comprehensive farm management software (FMS) leveraged through Microsoft Azure technology, equips farms worldwide with data-driven tools and real-time insights from the field to make precise agronomic and economic decisions and maximize productivity, profitability and sustainability.
With one centralized platform, farmers can manage their farm and have full control over all processes throughout the season: creating a crop plan and budget, minimizing risks and crop damage based on insights from the field such as weather conditions, pest alarms, or vegetation indices, monitoring the execution of work in the field, tracking costs, and ensuring complete traceability of crop production.
Insights based on data and actual field conditions enable farmers and companies to make informed choices about planting, fertilizing, and protection of crops, irrigation, and other parts of the production cycle. Data-driven farm management increases production efficiency and product quality and reduces resource consumption and food waste caused by quality and safety issues, which overall contributes to the preservation of the environment and the reduction of the negative effects of agriculture on the environment.
[GG]: AI is top of mind for all of us. How is AGRIVI leveraging or planning to leverage AI in app development?
[MZ]: Despite many agtech solutions being available, we are witnessing that technology adoption has remained limited mainly to professional farmers and enterprises. Technology in agriculture has not been simple and intuitive enough to reach all farmers, leaving around 500 million farms still not digitalized.
Recognizing this gap, last year we launched AGRIVI Ed, an Agronomic AI advisor designed to democratize access to technology and revolutionize how farmers access knowledge and make decisions. AGRIVI Ed acts as an expert agronomic advisor available 24/7, supporting over 50 languages and accessible through messaging platforms like WhatsApp and Viber to make it intuitive and seamless for farmers to use, regardless of their technological knowledge.
Thanks to Azure OpenAI service and Copilot that uses Generative AI and different domain AI services we have trained our AI advisor to deliver high quality and extensive agronomic knowledge and drive seamless communication with the end-users. To expand our impact further, we’ve introduced this solution to the market as AGRIVI Engage, a fully managed B2B whitelabeled platform for agrifood companies and the public sector. This collaborative approach is enabling us to accelerate the adoption of technology and drive positive change globally.
Personalized AI Advisor enables companies and organizations to provide farmers with constant access to information, knowledge, and advice. It fosters active collaboration and upskilling farmers toward digital transformation, while helping companies to increase their market share, engage with their customers and drive decarbonization in their supply chains.
[GG]: What Microsoft cloud products did you use in your app development?
[MZ]: We have developed our products with a comprehensive technology stack, including Microsoft Azure Cloud, Microsoft .NET, Azure OpenAI Service, Copilot, and many others.
[GG]: How has Microsoft supported you along your journey?
[MZ]: AGRIVI has participated in Microsoft Founders Hub program and Entrepreneurship for Positive Impact Program. Through these programs, we had access to a substantial set of complimentary Microsoft products, technical support and go-to-market services, and coaching by senior leaders at Microsoft. For a fast-growing company, such support means a lot.
[GG]: Agrivi is a part of Microsoft’s Entrepreneurship for Positive Impact, supporting the UN Sustainable Development Goals (SDGs). How does being a part of this program align with the values and mission of your company?
[MZ]: Our solution is not just about bringing a simple farming app to growers; we aim to develop cutting-edge technology solutions and leverage the power of AI to support farmers in being more efficient and profitable, to reduce waste, and to lower barriers to entry for a new generation of farmers.
Driving positive impact requires partnerships, which is why the opportunity to participate in Microsoft’s Entrepreneurship for Positive Impact program gave us high-level support in navigating our strategic and go-to-market challenges. The knowledge, mentorship, and visibility gained through the program truly help us scale our impact to the next level.
[GG]: Can you share examples of your work with customers that has enabled positive business and sustainability outcomes?
[MZ]: We recently launched AI agronomic advisors in collaboration with local municipalities in Croatia, which have decided to adopt this innovative technology recognizing its importance for local agriculture. These local municipalities have introduced their own personalized AI communication platform to farmers across Croatia.
Through the WhatsApp platform, the advisor provides seamless access to high-quality knowledge, information, and advice for farmers. They cover a wide spectrum of agronomic topics, from general agricultural inquiries to pest control, disease prevention, plant nutrition, fertilization, and regenerative practices. Additionally, they facilitate active collaboration between local municipalities and farmers. They give farmers easy access to information about tenders or subsidies and allow them to provide feedback on local policies, making them more involved in strategic plans and community matters.
As local governments worldwide grapple with challenges in maintaining sustainable agricultural production, farmers face disruptions due to climate change, labor shortages, and limited access to knowledge and advice. Supporting farmers to sustain and secure local food production has never been more critical.
That’s why by providing farmers with information and advice and enabling direct engagement with local authorities, the local municipalities, as the main providers of this innovative tool, have the opportunity to receive regular feedback from their local farmers which helps preserve local crop production and encourages sustainability within the community.
[GG]: How do you engage and educate your customers about sustainability, and what are some ways you encourage them to make solution investments that drive sustainability?
[MZ]: After 10 years of business in agtech industry, we are still facing the biggest challenge – changing the habit of making decisions based on intuition towards data-driven and fact-based decision-making. Transformation takes time, and on top of building products that solve real-life problems, it requires education in the industry, new skillsets, new regulatory and legislation systems, collaboration of the entire value chain, patient capital and much more.
That’s why we see our role as a technology provider but also as an educator. We are very active in raising awareness and educating our customers across the entire agrifood value chain about how our solutions help them transform traditional agriculture practices to more data-driven and climate-friendly, and how to leverage data and technology to empower sustainable practices and calculate their contribution to this matter.
Global food production is in a very bad state and there is a need for urgent change. Luckily regulatory requirements for making the food sector more productive and sustainable will soon require companies to change, and our role here is to help them navigate the change and embrace the opportunity that is in front of them.
That’s why we actively participate in global initiatives serving as the UN Global Compact Board member in Croatia, OECD-FAO Advisory Group member for Responsible Agriculture Supply Chains, and ambassadors at EIC. We actively encourage, improve, and advocate for digital transformation of the entire agri-food industry.
[GG]: Have you collaborated with other partners or community organizations in the Microsoft ecosystem? How has this collaboration been beneficial?
[MZ]: We are proud and grateful that we had the opportunity to present AGRIVI’s AI solutions during the World Economic Forum in Davos in collaboration with Microsoft, Schwab Foundation and SAP. Also, we had the opportunity to partner with diverse divisions within Microsoft, including Microsoft Research team that has supported the development of our AI solutions during the Hackathon for positive impact and we have jointly showcased it during the World AgriTech Innovation Summit in San Francisco.
[GG]: What are you most proud of in your journey building/leading Agrivi? What’s next?
[MZ]: For me, everything is about people. The great, passionate team of experts that stand behind AGRIVI is the thing I am most proud of. Their dedication, passion, and tireless efforts are the driving force behind AGRIVI’s success, transforming my vision from 10 years ago into a truly great impact we are making today.
With more experience and expertise from the industry, and thanks to Microsoft’s mentorship, we have been able to accelerate our impact. So, we are remaining committed to pushing the boundaries of innovation and expanding our reach to even more farmers globally. Our vision is to create a future where agriculture is able to feed the entire global population in a productive and sustainable way, and we believe that by harnessing the power of technology, we can make this vision a reality.
Demystifying Azure Open AI for App developers
Co-Authors (Prakash, Prabhjot)
The purpose of this blog is to cover the concepts related to Azure OpenAI in a concise, easy-to-understand format for anyone with no or limited ML background.
Let’s begin by understanding the fundamental components of Azure OpenAI solutions, their tools, and patterns, and explore how they are distinct from Azure OpenAI itself.
OpenAI is an independent research organization focused on artificial intelligence (AI) which, in addition to research, develops various models such as GPT-4, GPT-4V, DALL-E 3, and Whisper (GPT stands for Generative Pre-trained Transformer). Primary use cases for the GPT models are natural language processing tasks such as language translation, text summarization, and Q&A. These models can be used with enterprise data and additional domain-specific models through various patterns and techniques.
Azure OpenAI refers to the collaboration between OpenAI and Microsoft Azure. Under this partnership, OpenAI’s AI models and technologies are hosted by Microsoft in Azure, making them accessible to developers and organizations through the Azure platform. The Azure OpenAI service automatically encrypts any data that persists in the cloud, including training data and fine-tuned models. This encryption helps protect the data and ensures that it meets organizational security and compliance requirements. Although Azure OpenAI is designed to meet data protection, privacy, and security standards, it is your responsibility to use the technology in compliance with applicable laws and regulations and in a manner that aligns with your specific business needs.
Azure Open AI Benefits
Pricing: Azure OpenAI and OpenAI have different pricing policies.
Regional availability: Azure OpenAI is available in multiple regions.
Tokens: Azure OpenAI enforces token-based rate limits (quotas) on model deployments.
Data Safety: Data submitted to the Azure OpenAI Service, including prompts (inputs), completions (outputs), embeddings, and any training data, remains within the customer's Azure subscription and is not used by Azure OpenAI or passed to OpenAI for model improvements, training, or predictions. No data is shared between customers.
Capabilities: Azure OpenAI provides a safe and reliable ecosystem to safeguard your data, while OpenAI provides advanced language AI models like OpenAI GPT or DALL-E.
Data Retention: Microsoft retains abuse-monitoring data for only 30 days, and customers can request to opt out of the process.
Responsible AI: Azure Open AI goes through an RAI ensemble of AI models to filter Inputs and outputs for Sex, Hate, Violence, Self-Harm. These filters are configurable by customers.
Components of Azure OpenAI
Azure OpenAI offers a ready-to-use service with finely tuned capabilities, accessible via an API (Model as a Service). The key assets contributing to the Generative AI solution include LLMs, agents, plugins, prompts, chains, and APIs.
Fundamentals of Utilizing Azure OpenAI:
Prompting: The models operate on a prompt-based system. Interaction with the Model/API is conducted through prompts, and crafting an effective prompt is crucial, known as prompt engineering, to enhance relevance and precision.
Grounding: This technique provides the Model with context to yield more pertinent responses. Grounding can be achieved through various methods, such as embedding, to provide the necessary background information.
Chunking: This process divides extensive documents into smaller segments manageable by embedding models, ensuring adherence to maximum token input limits. These segments populate vector stores and facilitate text-to-vector query transformations.
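As an illustration of the chunking step, a minimal fixed-size chunker with overlap might look like the following sketch in plain Python (real pipelines often split on sentence or token boundaries rather than characters, but the principle is the same):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so each stays under a model's input limit."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

The overlap preserves context at chunk borders, so a sentence straddling a boundary still appears whole in at least one chunk before the chunks are embedded and stored.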
Fine-Tuning: In instances where prompt engineering does not yield accurate responses or domain-specific behavior is required, fine-tuning re-trains the LLM with sample data to optimize it for your dataset.
Tokens: Tokenization is the process of breaking text into smaller segments called tokens; a token can be a whole word, part of a word, or just a few characters. Token usage and limits depend on the model you are using.
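Production code typically counts tokens with the model's own tokenizer (for example, the `tiktoken` library for OpenAI models). As a rough illustration only, a common rule of thumb is that one token is about four characters of English text:

```python
import math

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb.
    This is a heuristic only; real models use BPE tokenizers, so counts differ."""
    return math.ceil(len(text) / chars_per_token)
```

An estimate like this is useful for quick budgeting of prompt sizes against a model's context window, but the exact tokenizer should be used before enforcing hard limits.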
Prompts in Azure OpenAI
We query Azure OpenAI using prompts (fig 1). A prompt has three core components:
System/meta prompt
Question/Query
Sources/Context
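In chat-style APIs these three components typically map onto a messages list, with the grounding sources folded into the system or user content. A minimal sketch (the role names follow the common chat-completions convention; no API call is made here):

```python
def build_prompt(system_prompt: str, question: str, sources: list[str]) -> list[dict]:
    """Assemble a chat-completions style message list from the three prompt components:
    system/meta prompt, question/query, and sources/context."""
    context = "\n\n".join(sources)
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
```

Keeping the three components separate in code makes it easy to iterate on the system prompt or swap in different grounding sources without touching the rest of the application.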
Semantic Kernel and LangChain
LangChain and Semantic Kernel have some similarities, but each has its own unique features and use cases.
LangChain
LangChain is modular and supports both Python and JavaScript/TypeScript. It streamlines development by breaking down complex tasks into a sequence of components. LangChain offers a versatile framework for developing applications that involve natural language processing (NLP) tasks. Its modular nature and support for both Python and JavaScript/TypeScript indicate flexibility in development environments. Breaking down complex tasks into manageable components like Model I/O, Retrieval, Chains, Agents, Memory, and Response simplifies the development process and allows for easier debugging and maintenance.
The use of Chains to construct sequences of calls suggests a workflow-oriented approach, where developers can organize tasks into a logical sequence. Agents add another layer of abstraction by enabling chains to choose tools based on high-level directives, potentially increasing adaptability and efficiency.
The inclusion of Memory for persisting application state between runs of a chain indicates support for stateful processing, which can be crucial for certain types of applications where context needs to be maintained across interactions.
Overall, LangChain appears to be a good tool for building applications that involve NLP tasks, offering modularity, flexibility, and support for different programming languages. Its components provide developers with a structured approach to developing complex applications while streamlining the development process.
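The chain idea at the heart of LangChain is essentially function composition. The following plain-Python sketch (not LangChain's actual API) shows how a "chain" strings components together, each step consuming the previous step's output:

```python
class Chain:
    """Toy illustration of the chain concept: a pipeline of callables."""
    def __init__(self, *steps):
        self.steps = steps

    def run(self, value):
        for step in self.steps:
            value = step(value)
        return value

# Hypothetical steps standing in for a retriever, a prompt template, and a model call.
chain = Chain(
    lambda q: {"question": q, "docs": ["relevant passage"]},   # "Retrieval"
    lambda d: f"Answer using {d['docs'][0]}: {d['question']}",  # prompt template
    lambda prompt: prompt.upper(),                              # stands in for an LLM call
)
```

Real chains add error handling, memory, and agent-driven tool selection on top of this basic pipeline shape, but the sequencing model is the same.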
Semantic Kernel
Semantic Kernel is an open-source SDK (software development kit) that simplifies the process of constructing agents that can activate your existing code. It is a highly adaptable SDK that is compatible with models from OpenAI, Azure OpenAI, Hugging Face, and beyond. By merging your existing C#, Python, and Java code with these models, you can create agents that are proficient in responding to questions and automating tasks.
Empowering Developers with Semantic Kernel:
To assist developers in crafting their own Copilot experiences using AI plugins, we have unveiled Semantic Kernel, a streamlined open-source SDK that orchestrates your existing code (plugins) with AI.
Harness the same AI orchestration techniques that drive Microsoft’s Copilots in your applications.
Beyond Simple Chat Applications: While modern AI models are adept at generating messages and images, constructing fully autonomous AI agents that can automate business operations and enhance user productivity requires more. A framework that can interpret model responses and utilize them to trigger existing code is essential for productive tasks.
Semantic Kernel fulfills this need by providing an SDK that enables you to describe your existing code to AI models, allowing them to request its execution. Semantic Kernel then converts the model’s response into an actionable call to your code.
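The core pattern here — describing your code to the model and translating the model's response into an actual call — can be sketched in plain Python. This is an illustration of the pattern, not the Semantic Kernel API, and the model's "response" below is a hard-coded stand-in:

```python
import json

# A registry of "plugins": native functions whose descriptions are shown to the model.
PLUGINS = {
    "get_order_status": lambda order_id: f"Order {order_id} has shipped.",
}

def execute_model_request(model_response: str) -> str:
    """Convert a model's function-call response (JSON) into an actual call
    against the registered plugin, returning the plugin's result."""
    request = json.loads(model_response)
    func = PLUGINS[request["function"]]
    return func(**request["arguments"])

# A stand-in for what an AI model might return after seeing the plugin descriptions.
fake_response = json.dumps(
    {"function": "get_order_status", "arguments": {"order_id": "A123"}}
)
```

Semantic Kernel automates exactly this loop: it advertises your plugins to the model, parses the model's structured reply, and dispatches the call, so the model can drive real business operations rather than only generate text.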
To summarize, LangChain is a powerful framework with more out-of-the-box tools and integrations, whereas Semantic Kernel is more lightweight. Both frameworks have a wide range of use cases, making them versatile tools for developers. Whether you choose LangChain or Semantic Kernel will depend on the languages your team supports and which features and integrations are included out of the box.
Samples: semantic-kernel/dotnet/samples at main · microsoft/semantic-kernel (github.com)
Vector DB
Vectors and Embeddings
A vector representation captures the essential characteristics of an item in numerical form. An embedding is a special type of vector representation of data that LLMs can use.
A vector database is a storage system engineered to house and handle vector embeddings, which are numerical representations of complex data within a multi-dimensional space. Each dimension in this space is associated with a particular attribute of the data, and sophisticated data can be represented using tens of thousands of dimensions. The position of a vector within this space signifies its distinct characteristics. Various types of data, including words, phrases, documents, images, and audio, can be converted into vector form. These embeddings are crucial for functions such as similarity searches, multi-modal searches, recommendation systems, and large language models (LLMs), among others.
In a vector database, embeddings are indexed and queried through vector search algorithms based on their vector distance or similarity.
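Cosine similarity is one common similarity measure. A minimal pure-Python nearest-neighbor search over a toy embedding store might look like this (real vector databases use approximate-nearest-neighbor indexes such as HNSW to scale far beyond brute force; the two-dimensional vectors here are illustrative only):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query: list[float], store: dict[str, list[float]]) -> str:
    """Return the key of the stored embedding most similar to the query (brute force)."""
    return max(store, key=lambda k: cosine_similarity(query, store[k]))
```

In a real system the query text would first be converted to a vector with the same embedding model used to populate the store; the search step is then exactly this comparison, performed at scale by the database's index.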
The following are some of the Vector Databases:
Cosmos DB
AI Search: Azure AI Search stores the data that you query over. Use it as a pure vector store whenever you need long-term memory, a knowledge base, grounding data for a Retrieval Augmented Generation (RAG) architecture, or any app that uses vectors.
PostgreSQL with vector extension
Pinecone
Any open-source vector database
Retrieval Augmented Generation (RAG)
Retrieval augmented generation (RAG) is an essential element of utilizing Generative AI, particularly in enterprise contexts. This approach involves acquiring domain-specific knowledge and integrating it with the initial prompt (refer to figure 2) to enhance the precision and relevance of the results produced by Azure OpenAI. The 'Bring Your Own Data' feature is a unique capability that facilitates the implementation of RAG, and Azure AI Studio simplifies its application for straightforward scenarios.
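The pattern can be sketched end to end in a few lines. Here a toy keyword-overlap retriever stands in for a real vector search, and the assembled prompt would then be sent to the model (the documents are hypothetical and no API call is made):

```python
def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question.
    A production system would use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Ground the prompt in retrieved context before sending it to the model."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The key point is that only the retrieved, relevant slice of domain knowledge is injected into the prompt, keeping it within token limits while grounding the model's answer.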
Responsible AI (RAI)
Built-in features in Azure OpenAI Studio: Azure OpenAI routes inputs and outputs through an ensemble of Responsible AI (RAI) models that filter for sexual, hateful, violent, and self-harm content. These filters are configurable by customers.
RAI Toolbox Github repository
Tools: The following tools can be used to develop Azure OpenAI-based solutions:
AI Studio
Copilot Studio
Visual Studio
VS Code (Visual Studio Code)
GitHub Copilot
Small Language Model (SLM)
Compact language models, such as Microsoft's Phi family and models from various other providers, possess capabilities akin to larger Generative AI models while requiring significantly fewer resources. They can run on commodity Nvidia-based hardware, allowing Small Language Models (SLMs) to be deployed in diverse settings. SLMs demonstrate considerable proficiency in areas like common-sense reasoning, language comprehension, and knowledge. However, they may not match the larger models in terms of world knowledge due to their size constraints.
Azure OpenAI Approved as a Service within the FedRAMP High Authorization for Azure Commercial
Microsoft’s Azure OpenAI service is now included within the US Federal Risk and Authorization Management Program (FedRAMP) High Authorization for Azure Commercial. This Provisional Authorization to Operate (P-ATO) within the existing FedRAMP High Azure Commercial environment was approved by the FedRAMP Joint Authorization Board (JAB). This milestone follows our previously announced solution enabling Azure Government customers to access Azure OpenAI Service in the commercial environment. With this latest update, agencies requiring FedRAMP High can directly access Azure OpenAI from Azure commercial.
Challenges
Early adopters have faced several challenges, which ongoing efforts, patterns, and approaches such as API Management (APIM) or AI landing zones can help mitigate:
Model Updates: Frequent modifications to the underlying Large Language Models (LLMs) can pose operational challenges.
Multilingual Scenarios: In applications supporting multiple languages, the accuracy of responses may decline, with LLMs potentially delivering mixed-language content.
Performance, HA/DR: Ensuring consistent performance in production applications that use Azure OpenAI can be challenging, with possible increased latency.
Secure Sensitive Information: To secure sensitive data, enterprises must work closely with their Office of Responsible AI during the project qualification stage, especially for sensitive AI use cases. Strict adherence to their advice on managing sensitive or explicit content is imperative. Organizations are required to follow established security principles and apply data classification labels, known as sensitivity labels, to protect documents, emails, PDFs, Teams meetings, and chats.
Cost Management: A strategy employed by customers involves using an orchestrator to determine which GPT model to invoke based on the query. Not all queries necessitate GPT-4; many can be adequately addressed with GPT-3.5, thus managing costs effectively.
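A minimal sketch of such an orchestrator follows. The routing heuristic and model names here are illustrative assumptions; real routers often use a classifier or a cheap model call to score query complexity:

```python
def route_model(query: str, complexity_threshold: int = 30) -> str:
    """Pick a cheaper model for simple queries and a stronger one otherwise.
    The word-count and keyword heuristics are purely illustrative."""
    complex_markers = ("explain", "compare", "analyze", "step by step")
    is_complex = (
        len(query.split()) > complexity_threshold
        or any(marker in query.lower() for marker in complex_markers)
    )
    return "gpt-4" if is_complex else "gpt-3.5-turbo"
```

Since GPT-4-class calls can cost an order of magnitude more than GPT-3.5-class calls, even a crude router like this can cut spend substantially when most traffic is simple.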
In conclusion, this article should help you quickly understand the opportunities and the landscape of enabling Azure OpenAI in your applications.
References
https://github.com/FreddyAyala/AzureAIServicesLandingZone
Azure/AI-in-a-Box (github.com)
Azure/azure-openai-samples (github.com) – a community-maintained collection of code samples illustrating how to use Azure OpenAI to create AI solutions for various use cases across industries.
Phi-2: The surprising power of small language models – Microsoft Research
semantic-kernel/dotnet/samples at main · microsoft/semantic-kernel (github.com)