Month: October 2024
Launching the Arc Jumpstart Newsletter: October 2024 Edition
👋 Welcome!
We are excited to kick off this monthly newsletter, where you can get the latest updates on everything happening in the Arc Jumpstart realm. Whether you are new to the community or a regular Jumpstart contributor, this newsletter will keep you informed about new releases, key events, and opportunities to get involved within the Azure Adaptive Cloud ecosystem. Check back each month for new ways to connect, share your experiences, and learn from others in the Adaptive Cloud community.
💧 Arc Jumpstart Drops
Drop of the Month – “Talk to Your Factory – Optimizing Factory Operations with GenAI”
Key Features and Benefits
Smart Factory demo, enhanced by Generative AI.
See how the Smart Factory leverages Generative AI to optimize its operations!
Real-time ingestion and processing of operations data (OT): operators, manufactured products, and machine maintenance schedules.
Data Processing: data is structured following a Medallion Architecture, with the goal of incrementally and progressively improving the structure and quality of the data as it flows through each layer of the architecture.
Natural Language Processing (NLP): a Smart Assistant, enhanced by Generative AI, empowers operators to ask complex questions about machine operations, staff, and production quality, as if they were speaking to a human expert in the factory.
Video on the IoT Show (demo starts at 19:54)
Additional Arc Jumpstart Drops
How to Use the Key Vault Extension to Acquire Certificates on Arc-Enabled Windows Servers: This example shows how to securely acquire and manage certificates using the Azure Key Vault extension on your Azure Arc-enabled servers.
Using the Multicloud Connector Enabled by Azure Arc: This example shows how to onboard EC2 instances from Amazon Web Services (AWS) using the multicloud connector enabled by Azure Arc.
⚡ Arc Architecture Posters and Diagrams (APD) Updates
The ACX ECE and the Arc Jumpstart team are happy to share the Arc Architecture Posters and Diagrams (APD) bundle release for October 2024.
This release includes a total of 74 diagrams and 6 architecture posters, with 3 new and 4 updated diagrams across the entire Adaptive Cloud product portfolio. It includes highly requested diagrams for Arc gateway, as well as updated versions of the all-up Arc solution overview diagram and Azure Container Storage enabled by Azure Arc (ACSA).
Release Notes
Note: The changelog file can be found in the bundle. Posters are provided in both PPTX and PDF file formats. All posters are designed to be printed on a large kanban and make a great swag giveaway at various events.
Diagram Updates
Posters Updates
Bonus
This month, by popular demand, we hosted a live stream with a behind-the-scenes look at how these diagrams are created and the thought process behind them.
🤝 Stay Connected
Attending Ignite? Join Us for a Show Floor Interview
Join us at Ignite as we conduct Show Floor Interviews this year, spotlighting our amazing Adaptive cloud ecosystem partners!
Whether you have a booth or not, let’s connect! We are eager to engage with innovative minds from all backgrounds. If you’re a solution architect, product manager, sales engineer, technical specialist, or Microsoft MVP, we want to hear about your cutting-edge solutions and unique insights.
We will be conducting dynamic 5-7 minute interviews right at the event, which will be featured on our Jumpstart YouTube channel and LinkedIn. This is a fantastic opportunity to showcase your product to a global audience eager to learn about the latest advancements in cloud-native and hybrid solutions.
If you’re interested in highlighting your innovations and boosting your product’s visibility – whether you’re representing a booth or simply passionate about your solutions – leave a comment on this blog post or on our LinkedIn post and we will be in touch!
Join the November Adaptive Cloud Community Call
Please join us on Wednesday, November 6th, from 8-9am PST (11am-12pm EST) for our November Adaptive Cloud Community Call (invite can be found here).
Join the Adaptive Cloud Community LinkedIn Group
Join the Adaptive Cloud Community LinkedIn Group! This platform is designed for professionals who are passionate about hybrid, multi-cloud, and edge technologies.
Whether you’re just getting started or you’re a seasoned expert, this community is your space to connect, learn, and grow with others who share your interests. Together, we’ll explore the latest innovations, share insights, and tackle challenges related to Azure Arc, Azure Stack HCI, Azure IoT, AKS, and beyond.
What to Expect:
Engagement: Connect with peers and Microsoft experts to exchange ideas, best practices, and solutions.
Learning: Gain access to valuable resources, discussions, and events that will deepen your knowledge of the Azure Adaptive Cloud.
Growth: Participate in discussions and activities that will help you grow your skills and contribute to the future of cloud computing.
Microsoft Tech Community – Latest Blogs –Read More
Changes to file open behavior for Word, Excel, and PowerPoint files on Outlook iOS and Android
We are making a change to the default file open behavior on Android and iOS devices when users open Word, Excel, and PowerPoint files from Outlook, OneDrive, and Teams.
Customers have told us there is uncertainty about how files should be opened, given that the Microsoft 365 mobile app has the built-in ability to open and create Word, Excel, and PowerPoint files. To address that feedback, we plan to standardize the file open behavior across the Outlook, OneDrive, and Teams mobile app experiences as follows:
Configuration 1: Users have both the Microsoft 365 app and the standalone Word, Excel, and PowerPoint apps installed.
Previous/current behavior: The Microsoft 365 app would typically handle file opens.
Planned behavior: The standalone Word, Excel, and PowerPoint apps will handle their respective file open actions.

Configuration 2: Users have the Microsoft 365 app installed, but not the standalone Word, Excel, and PowerPoint apps.
Previous/current behavior: The Microsoft 365 app handles file opens.
Planned behavior: The Microsoft 365 app handles file opens (unchanged).

Configuration 3: Users have neither the Microsoft 365 app nor the standalone Word, Excel, and PowerPoint apps installed.
Previous/current behavior: Users are directed to the App Store / Google Play to download the Microsoft 365 app.
Planned behavior: Users are directed to the App Store / Google Play to download the standalone Word, Excel, or PowerPoint app.
Why This Change?
Our goal is to improve predictability around how files are opened when starting from our key “hub” experiences on mobile devices (OneDrive, Outlook, Teams): the standalone Word, Excel, and PowerPoint apps, if installed, will be favored to handle file open operations. For customers who want to open more than one Word, Excel, or PowerPoint file at once, the standalone apps can better handle the side-by-side and windowing scenarios that modern tablet and mobile operating systems support.
If you are a commercial customer and want your employees to use only the combined Microsoft 365 app, you can implement an MDM policy that restricts installation of the standalone apps on your managed mobile devices; your users will then continue to launch the Microsoft 365 app for file open actions.
Timelines for this change
As of October 2024, here is our guidance on when you will start to see these changes:
OneDrive iOS and Android – changes are already in place
Outlook iOS and Android – rolling out in October through November
Teams iOS and Android – timing is being finalized
We Value Your Feedback
Your feedback is crucial to us. We are committed to continuously improving your experience with the Microsoft 365 suite of products. If you have any questions or need assistance, please don’t hesitate to reach out to our support team.
Thank you for being a valued Microsoft customer. We look forward to bringing you more updates and enhancements in the future.
Poor query performance on one of the instances of SQL Server – Identical plan
We have a newly built server where one of the queries takes 110 ms, while on the rest of the servers it takes 6 ms. The query plan and the dataset are identical. The hardware configuration is also identical.
If we turn this query into a stored procedure, the performance is 6 ms. If we pad this query with comments and bulk up the query text, the elapsed time balloons to 200 ms or more. With a padded comment, performance is still good on the rest of the good servers.
There is a minor difference in the version of SQL Server: the good instances are on CU7, whereas the problematic instance is on CU9.
We looked into different DMVs for any anomalies and couldn’t find one.
Wondering if there is something we can do to investigate where the bottleneck is. Needless to say, we are fairly new to SQL Server.
Users can’t access Viva Goals
Hi – recently, a number of our users suddenly can no longer access Viva Goals, either through Teams or a web browser. In the browser, both a direct link to Viva Goals and going through the Office Portal give them similar issues.
Here are a few error codes and screenshots from various affected users:
Error code:
20241009T132715Z-17bd99658d9cdh6wcrg61u11h40000000m5g00000000bv8q
Error code:
20241004T135842Z-15767c5fc55472x4k7dmphmadg0000000c7g00000000hx41
Error code:
20241003T173249Z-17bd99658d9vhcxnmvkgw4dz8c0000000a5g00000000cm6
Error code:
20241008T190645Z-16b659b4499f652swtf15xkd7s00000000ng000000003n7q
Confirming they all have licenses applied to their accounts on Admin Center and are in our one and only organization. I also don’t see anyone on the user-led trials being affected by this.
+ we have Teams Admin Center granting access to Viva Goals org-wide.
Has anyone else experienced this or know how to solve it?
I personally can’t see a pattern that would explain what might be causing this…
Excel Labs Array Module – What are your thoughts?
Sorry for what will seem like a code dump, but I’m curious whether any of you have tried to create similar modules, or if you have any comments/wisdom to share about the current incarnation of my module. I recently ran it through ChatGPT, so I’m not sure if it slipped in any errors; the structure should, however, be largely accurate and give enough detail to let you understand what I tried to accomplish.
Do you notice any shortcomings, obvious enhancements, or alternate approaches to the functions? In particular, I am always concerned about the alternate ways to handle array functions and whether my intuition about formula efficiencies is in the right place.
If there is enough interest, I may share some of my other modules.
// arr module
// This module provides a suite of array manipulation functions to enhance and extend Excel’s native capabilities.
// The functions are scoped under the `arr.` namespace to prevent naming conflicts with Excel’s built-in functions, ensuring reliable use throughout any workbook.
// Function names have been carefully selected to avoid ambiguity or collision with Excel’s native features, especially when referenced internally without the `arr.` prefix.
// Below is an overview of the functions provided in this module, organized by their core functionalities:
// 1. Basic Information (Public Interface)
// These functions provide basic array analysis and selection tools.
// – dimensions: Returns the number of rows and columns in an array, optionally including headers.
// – getColumnIdxByName: Retrieves column indices from an array based on header names.
// – uniqueElements: Extracts unique elements from an array, returning them either as a row or column.
// – countsByElement: Counts occurrences of elements in an array with options for ignoring blanks, errors, and sorting.
// 2. Comparisons (Public Interface)
// Functions that allow for comparison between arrays and columns.
// – areEqualDimension: Checks if two arrays have equal dimensions (width, height, or size).
// – compareColumns: Compares columns of an array based on a value and a specified operator.
// – getDiffDimensionFunc: Calculates the difference in dimensions (width, height, or size) between two arrays.
// 3. Miscellaneous Functions (Public Interface)
// General functions for filling arrays and creating values.
// – fillArray: Fills an array with specified text over a defined number of rows and columns.
// 4. Core Operations (Public Interface)
// These high-level array manipulation functions are designed for direct user interaction and support common array tasks.
// Basic Combination and Addition
// – stack: Stack two arrays either vertically or horizontally.
// – stackOn: Stack arrays with user-specified placement (e.g., above, below, left, right).
// – stackAndExpand: Stack two arrays while expanding dimensions to match as needed.
// Subset Selection and Deletion
// – sliceCols: Extract or remove specific columns from an array.
// – sliceRows: Extract or remove specific rows from an array.
// – trimValue: Trim specified values (e.g., blanks) from rows or columns.
// 5. Complex Transformations (Public Interface)
// These functions enable higher-level array manipulations such as flattening, replacing, or transforming data.
// – flatten: Convert a two-dimensional array into a one-dimensional list, with options for sorting and filtering.
// – replaceBlankCells: Replace blank cells in an array with a specified value.
// – replaceCell: Replace specific values in an array based on a condition.
// – replaceCols: Replace or insert entire columns in an array with options to match dimensions.
// – replaceRows: Replace or insert entire rows in an array with options to match dimensions.
// 6. Helper Functions (Internal Use)
// These internal-use functions assist with specific operations and are prefixed with an underscore to denote their private nature.
// Dimension and Size Helpers
// – _areSameHeight: Checks if two arrays have the same height.
// – _areSameWidth: Checks if two arrays have the same width.
// – _areSameSize: Checks if two arrays have the same size.
// – _ensureHeight: Ensure an array has the same or greater height than a reference array.
// – _ensureWidth: Ensure an array has the same or greater width than a reference array.
// – _diffHeight: Calculates the height difference between two arrays.
// – _diffWidth: Calculates the width difference between two arrays.
// – _diffSize: Calculates the size difference (width and height) between two arrays.
// – _maxHeight: Gets the maximum height between two arrays.
// – _maxWidth: Gets the maximum width between two arrays.
// Stacking Logic Helpers
// – _stackSwitch: Determines stacking behavior (e.g., above, below, left, right) based on user input.
// – _stackAndExpandSwitch: Expands dimensions as necessary before stacking based on user preference.
// – _stackAndExpandHeight: Expands and stacks arrays by height.
// – _stackAndExpandWidth: Expands and stacks arrays by width.
// – _stackAndExpandAllDimensions: Expands and stacks arrays in both dimensions (width and height).
// Basic Information
dimensions =
lambda(
target_array,
[show_names_df_FALSE],
if(
if(
isomitted(show_names_df_FALSE),
FALSE,
show_names_df_FALSE
),
vstack(hstack("rows", "columns"), hstack(rows(target_array), columns(target_array))),
hstack(rows(target_array), columns(target_array))
)
);
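// Example usage (hypothetical worksheet formulas; array literals use US-locale separators):
// =arr.dimensions({1,2,3;4,5,6})        // returns {2,3}
// =arr.dimensions({1,2,3;4,5,6}, TRUE)  // prepends the header row: {"rows","columns";2,3}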
getColumnIdxByName =
lambda(
array_with_headers,
column_names_row,
hstack(bycol(column_names_row, lambda(column_name, match(column_name, take(array_with_headers,1),0))))
);
uniqueElements =
lambda(
target_array,
[return_as_col_bool_df_TRUE],
trimValue(unique(flatten(target_array, return_as_col_bool_df_TRUE)))
);
countsByElement =
lambda(
target_array,
[search_array_df_SELF],
[show_element_values_df_FALSE],
[ignore_blanks_df_FALSE],
[ignore_errors_df_FALSE],
[sort_elements_df_0],
[traverse_cols_first_df_TRUE],
let(
flattened_target_array, flatten(target_array,,ignore_blanks_df_FALSE,ignore_errors_df_FALSE,,sort_elements_df_0,traverse_cols_first_df_TRUE),
flattened_search_array, if(isomitted(search_array_df_SELF), flattened_target_array, flatten(search_array_df_SELF)),
elements, unique(flattened_target_array),
pre_result,
byrow(
elements,
lambda(
element,
iferror(rows(filter(flattened_search_array, flattened_search_array=element)),0)
)
),
result,
if(
if(
isomitted(show_element_values_df_FALSE),FALSE,show_element_values_df_FALSE
),
hstack(elements, pre_result),
pre_result
),
result
)
);
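// Example usage (hypothetical): count occurrences of each unique element.
// =arr.countsByElement({"a","b";"a","c"})          // returns {2;1;1} (counts for a, b, c)
// =arr.countsByElement({"a","b";"a","c"}, , TRUE)  // pairs elements with counts: {"a",2;"b",1;"c",1}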
// Comparisons
areEqualDimension = LAMBDA(dimension, array1, array2,
SWITCH(
dimension,
"width", _areSameWidth(array1, array2),
"height", _areSameHeight(array1, array2),
"size", _areSameSize(array1, array2),
ERROR.TYPE(3)
)
);
compareColumns = LAMBDA(value_row, array_for_comparison, [comparison_operator], [comparison_col_idx], [value_col_idx],
LET(
operator, IF(ISOMITTED(comparison_operator), "=", comparison_operator),
comp_func, mask.comparisonFunc(operator), // mask.comparisonFunc returns #VALUE! for invalid operators
col_idx, IF(ISOMITTED(comparison_col_idx), 1, comparison_col_idx),
val_idx, IF(ISOMITTED(value_col_idx), 1, value_col_idx),
comp_value, IF(COLUMNS(value_row) > 1, CHOOSECOLS(value_row, val_idx), value_row),
comp_array, CHOOSECOLS(array_for_comparison, col_idx),
IF(comp_func = ERROR.TYPE(3), ERROR.TYPE(3), comp_func(comp_value, comp_array)) // Propagate #VALUE! if operator is invalid
)
);
getDiffDimensionFunc = LAMBDA(dimension, array1, array2,
SWITCH(
dimension,
"width", _diffWidth(array1, array2),
"height", _diffHeight(array1, array2),
"size", _diffSize(array1, array2),
ERROR.TYPE(3)
)
);
// Miscellaneous functions
fillArray = LAMBDA(r, c, txt, MAKEARRAY(r, c, LAMBDA(row, col, txt)));
// Stack Functions
stack = lambda(array_1, array_2, [vstack_bool_df_TRUE],
if(
if(
isomitted(vstack_bool_df_TRUE),
TRUE,
vstack_bool_df_TRUE
),
vstack(array_1, array_2),
hstack(array_1, array_2)
)
);
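// Example usage (hypothetical): stack defaults to vertical (VSTACK).
// =arr.stack({1,2},{3,4})         // returns {1,2;3,4}
// =arr.stack({1,2},{3,4},FALSE)   // horizontal stack: {1,2,3,4}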
stackOn =
lambda(
array_to_stack, fixed_array, [stack_placement_df_RIGHT], [match_shared_dimensions_df_TRUE], [fill_value_df_DBQT],
let(
match_shared_dimension, if(isomitted(match_shared_dimensions_df_TRUE),TRUE,match_shared_dimensions_df_TRUE),
result,
if(
match_shared_dimension,
_stackAndExpandSwitch(array_to_stack, fixed_array, stack_placement_df_RIGHT, fill_value_df_DBQT),
_stackSwitch(array_to_stack, fixed_array, stack_placement_df_RIGHT)
),
result
)
);
stackAndExpand =
lambda(array1, array2, [exp_width_bool_df_TRUE], [fill_value_df_blank], [exp_height_bool_df_TRUE], [vstack_bool_df_TRUE],
let(
expand_width, IF(ISOMITTED(exp_width_bool_df_TRUE), TRUE, exp_width_bool_df_TRUE),
expand_height, IF(ISOMITTED(exp_height_bool_df_TRUE), TRUE, exp_height_bool_df_TRUE),
stack_bool, if(ISOMITTED(vstack_bool_df_TRUE), TRUE, vstack_bool_df_TRUE),
result,
ifs(
expand_height * expand_width,
_stackAndExpandAllDimensions(array1, array2, fill_value_df_blank, stack_bool),
expand_height,
_stackAndExpandHeight(array1, array2, fill_value_df_blank, stack_bool),
expand_width,
_stackAndExpandWidth(array1, array2, fill_value_df_blank, stack_bool),
1,
ERROR.TYPE(3)
),
result
)
);
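// Example usage (hypothetical): mismatched dimensions are padded with the fill
// value (blank by default) before stacking.
// =arr.stackAndExpand({1,2,3},{4,5})  // returns {1,2,3;4,5,""}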
// Subset selection and Deletion
getColumnsByName =
lambda(
array_with_headers,
column_names_row,
choosecols(drop(array_with_headers,1),getColumnIdxByName(array_with_headers,column_names_row))
);
getNonZeroCells = LAMBDA(target_row_or_col,
LET(is_not_zero, is.notZero(target_row_or_col), FILTER(target_row_or_col, is_not_zero, ""))
);
sliceCols =
LAMBDA(
original_array,
no_columns_to_drop,
[no_of_columns_to_take],
[no_columns_to_drop_from_end],
LET(
after_first_drop, DROP(original_array, , no_columns_to_drop),
after_take,
IF(
ISOMITTED(no_of_columns_to_take),
after_first_drop,
TAKE(after_first_drop, , no_of_columns_to_take)
),
after_second_drop,
IF(
ISOMITTED(no_columns_to_drop_from_end),
after_take,
DROP(after_take, ,-no_columns_to_drop_from_end)
),
after_second_drop
)
);
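// Example usage (hypothetical): drop leading columns, then optionally take/trim.
// =arr.sliceCols({1,2,3,4,5}, 1, 3)    // drop 1 from the start, take 3: {2,3,4}
// =arr.sliceCols({1,2,3,4,5}, 1, , 1)  // drop 1 from each end: {2,3,4}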
sliceRows =
LAMBDA(
original_array,
no_rows_to_drop,
[no_rows_to_take],
[no_rows_to_drop_from_end],
LET(
after_first_drop, DROP(original_array, no_rows_to_drop),
after_take,
IF(
ISOMITTED(no_rows_to_take),
after_first_drop,
TAKE(after_first_drop, no_rows_to_take)
),
after_second_drop,
IF(
ISOMITTED(no_rows_to_drop_from_end),
after_take,
DROP(after_take, -no_rows_to_drop_from_end)
),
after_second_drop
)
);
trimValue =
lambda(
target_row_or_col,
[trim_value_df_BLANK],
let(
trim_mask,
if(
isomitted(trim_value_df_BLANK),
not(isblank(target_row_or_col)),
not(target_row_or_col = trim_value_df_BLANK)
),
filter(target_row_or_col, trim_mask, "")
)
);
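// Example usage (hypothetical): filter a vector down to the values worth keeping.
// =arr.trimValue({1,0,2,0,3}, 0)  // removes zeros: {1,2,3}
// =arr.trimValue(A1:A10)          // removes blank cells by default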
// Complex Transformations
flatten = LAMBDA(
target_array,
[return_as_column_bool_df_TRUE],
[ignore_blanks_df_FALSE],
[ignore_errors_df_FALSE],
[unique_elements_only_df_FALSE],
[sort_elements_df_0],
[traverse_cols_first_df_TRUE],
LET(
make_column_bool,
IF(ISOMITTED(return_as_column_bool_df_TRUE), TRUE, return_as_column_bool_df_TRUE),
ignore_blanks,
IF(ISOMITTED(ignore_blanks_df_FALSE), FALSE, ignore_blanks_df_FALSE),
ignore_errors,
IF(ISOMITTED(ignore_errors_df_FALSE), FALSE, ignore_errors_df_FALSE),
ignore_value,
(ignore_blanks * 1) + (ignore_errors * 2),
traverse_cols_first,
if(isomitted(traverse_cols_first_df_TRUE),TRUE,traverse_cols_first_df_TRUE),
pre_result,
IF(
make_column_bool,
TOCOL(target_array, ignore_value, traverse_cols_first),
TOROW(target_array, ignore_value, traverse_cols_first)
),
unique_elements_only_bool,
if(isomitted(unique_elements_only_df_FALSE), FALSE, unique_elements_only_df_FALSE),
sort_elements_value,
if(isomitted(sort_elements_df_0), 0, sort_elements_df_0),
after_unique_result,
if(unique_elements_only_bool, unique(pre_result), pre_result),
after_sort_result,
switch(
sort_elements_value,
0,
after_unique_result,
1,
sort(after_unique_result),
-1,
sort(after_unique_result,, -1),
error.type(3)
),
after_sort_result
)
);
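// Example usage (hypothetical): default output is a column, traversing columns first.
// =arr.flatten({1,2;3,4})                        // returns {1;3;2;4}
// =arr.flatten({1,2;3,4}, FALSE, , , , , FALSE)  // row output, row-major order: {1,2,3,4}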
replaceBlankCells =
LAMBDA(
array,
[replacement_value],
MAP(
array,
LAMBDA(
cur_cell,
IF(
ISBLANK(cur_cell),
IF(ISOMITTED(replacement_value), "", replacement_value),
cur_cell
)
)
)
);
replaceCell =
LAMBDA(
array,
target_cell_value,
replacement_value,
[comparison_operator],
MAP(
array,
LAMBDA(
cur_cell_value,
let(
comparison_func,
IF(
ISOMITTED(comparison_operator),
mask.comparisonFunc("="),
mask.comparisonFunc(comparison_operator)
),
comparison_result, comparison_func(cur_cell_value, target_cell_value),
if(
comparison_result,
replacement_value,
cur_cell_value
)
)
)
)
);
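// Example usage (hypothetical): replace cells matching a value under a comparison operator.
// =arr.replaceCell({1,2;3,4}, 3, 0)       // replaces each cell equal to 3 with 0
// =arr.replaceCell({1,2;3,4}, 2, 0, ">")  // uses mask.comparisonFunc(">") for the match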
replaceCols =
LAMBDA(
replacement_cols,
original_array,
[target_col_idx],
[insert_bool_default_false],
[trim_to_orig_size_bool_df_FALSE],
[expand_replacement_cols_to_match_rows_df_TRUE],
[expand_original_cols_to_match_rows_df_TRUE],
LET(
col_idx, IF(ISOMITTED(target_col_idx), 1, target_col_idx),
orig_cols, columns(original_array),
insert_bool,
IF(
ISOMITTED(insert_bool_default_false),
FALSE,
insert_bool_default_false
),
adj_orig_array,
if(
if(
isomitted(expand_original_cols_to_match_rows_df_TRUE),
TRUE,
expand_original_cols_to_match_rows_df_TRUE
),
_ensureHeight(replacement_cols,original_array),
original_array
),
adj_replacement_cols,
if(
if(
isomitted(expand_replacement_cols_to_match_rows_df_TRUE),
TRUE,
expand_replacement_cols_to_match_rows_df_TRUE
),
_ensureHeight(original_array,replacement_cols),
replacement_cols
),
first_part,
IF(
col_idx > 1,
HSTACK(TAKE(adj_orig_array, , col_idx - 1), adj_replacement_cols),
adj_replacement_cols
),
drop_cols,
if(
orig_cols>=col_idx,
if(
insert_bool,
col_idx-1,
col_idx+columns(adj_replacement_cols)-1
),
0
),
combined_parts,
IF(
or(drop_cols=0, drop_cols>=orig_cols),
first_part,
hstack(first_part, drop(adj_orig_array, ,drop_cols))
),
if(
if(
isomitted(trim_to_orig_size_bool_df_FALSE),
FALSE,
trim_to_orig_size_bool_df_FALSE
),
take(combined_parts, ,orig_cols),
combined_parts
)
)
);
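// Example usage (hypothetical): overwrite or insert whole columns.
// =arr.replaceCols({9;9}, {1,2;3,4})           // replaces column 1: {9,2;9,4}
// =arr.replaceCols({9;9}, {1,2;3,4}, 2, TRUE)  // inserts before column 2: {1,9,2;3,9,4}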
replaceRows =
LAMBDA(
replacement_rows,
original_array,
[target_row_idx],
[insert_bool_df_false],
[trim_to_orig_size_bool_df_FALSE],
[expand_replacement_rows_to_match_cols_df_TRUE],
[expand_original_rows_to_match_cols_df_TRUE],
LET(
row_idx, IF(ISOMITTED(target_row_idx), 1, target_row_idx),
orig_rows, rows(original_array),
insert_bool,
IF(
ISOMITTED(insert_bool_df_false),
FALSE,
insert_bool_df_false
),
adj_orig_array,
if(
if(
isomitted(expand_original_rows_to_match_cols_df_TRUE),
TRUE,
expand_original_rows_to_match_cols_df_TRUE
),
_ensureWidth(replacement_rows, original_array),
original_array
),
adj_replacement_rows,
if(
if(
isomitted(expand_replacement_rows_to_match_cols_df_TRUE),
TRUE,
expand_replacement_rows_to_match_cols_df_TRUE
),
_ensureWidth(original_array,replacement_rows),
replacement_rows
),
first_part,
IF(
row_idx > 1,
VSTACK(TAKE(adj_orig_array, row_idx - 1), adj_replacement_rows),
adj_replacement_rows
),
drop_rows,
if(
rows(adj_orig_array)>=row_idx,
if(
insert_bool,
row_idx-1,
row_idx+rows(adj_replacement_rows)-1
),
0
),
combined_parts,
IF(
or(drop_rows<=0, drop_rows>=rows(adj_orig_array)),
first_part,
vstack(first_part, drop(adj_orig_array, drop_rows))
),
result,
if(
if(
isomitted(trim_to_orig_size_bool_df_FALSE),
FALSE,
trim_to_orig_size_bool_df_FALSE
),
take(combined_parts, orig_rows),
combined_parts
),
result
)
);
// Dimension and Size Helpers
_areSameHeight = LAMBDA(array1, array2,
ROWS(array1) = ROWS(array2)
);
_areSameWidth = LAMBDA(array1, array2,
COLUMNS(array1) = COLUMNS(array2)
);
_areSameSize = LAMBDA(array1, array2,
AND(_areSameWidth(array1, array2), _areSameHeight(array1, array2))
);
_ensureHeight =
lambda(
reference_array,
expansion_array,
[fill_value_df_DBLQT],
expand(
expansion_array,
max(rows(reference_array), rows(expansion_array)),,
if(isomitted(fill_value_df_DBLQT), "", fill_value_df_DBLQT)
)
);
_ensureWidth =
lambda(
reference_array,
expansion_array,
[fill_value_df_DBLQT],
expand(
expansion_array, ,
max(columns(reference_array), columns(expansion_array)),
if(isomitted(fill_value_df_DBLQT), "", fill_value_df_DBLQT)
)
);
_diffHeight = LAMBDA(array1, array2,
ROWS(array1) – ROWS(array2)
);
_diffWidth = LAMBDA(array1, array2,
COLUMNS(array1) – COLUMNS(array2)
);
_diffSize = LAMBDA(array1, array2,
HSTACK(_diffHeight(array1, array2), _diffWidth(array1, array2))
);
_maxHeight = LAMBDA(arr_1, arr_2,
LET(
arr_1_height, ROWS(arr_1),
arr_2_height, ROWS(arr_2),
max_height, MAX(arr_1_height, arr_2_height),
max_height
)
);
_maxWidth = LAMBDA(arr_1, arr_2,
LET(
arr_1_width, COLUMNS(arr_1),
arr_2_width, COLUMNS(arr_2),
max_width, MAX(arr_1_width, arr_2_width),
max_width
)
);
// Stacking Logic Helpers
_stackSwitch =
lambda(
array_to_stack, fixed_array, stack_placement_df_RIGHT,
switch(
if(isomitted(stack_placement_df_RIGHT), "right", stack_placement_df_RIGHT),
"above",
vstack(array_to_stack, fixed_array),
"below",
vstack(fixed_array, array_to_stack),
"left",
hstack(array_to_stack, fixed_array),
"right",
hstack(fixed_array, array_to_stack),
error.type(3)
)
);
_stackAndExpandSwitch =
lambda(
array_to_stack, fixed_array, stack_placement_df_RIGHT, [fill_value_df_DBQT],
switch(
if(isomitted(stack_placement_df_RIGHT), "right", stack_placement_df_RIGHT),
"above",
_stackAndExpandWidth(array_to_stack, fixed_array, fill_value_df_DBQT),
"below",
_stackAndExpandWidth(fixed_array, array_to_stack, fill_value_df_DBQT),
"left",
_stackAndExpandHeight(array_to_stack, fixed_array, fill_value_df_DBQT),
"right",
_stackAndExpandHeight(fixed_array, array_to_stack, fill_value_df_DBQT),
error.type(3)
)
);
_stackAndExpandHeight =
LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_FALSE],
LET(
max_width, _maxWidth(array_1, array_2),
max_height, _maxHeight(array_1, array_2),
fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
stack_bool,
if(
isomitted(vstack_bool_df_FALSE),
FALSE,
vstack_bool_df_FALSE
),
expanded_array_1, EXPAND(array_1, max_height, , fill_char),
expanded_array_2, EXPAND(array_2, max_height, , fill_char),
stack(expanded_array_1, expanded_array_2, stack_bool)
)
);
_stackAndExpandWidth =
LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_TRUE],
LET(
max_width, _maxWidth(array_1, array_2),
max_height, _maxHeight(array_1, array_2),
fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
stack_bool,
if(
isomitted(vstack_bool_df_TRUE),
TRUE,
vstack_bool_df_TRUE
),
expanded_array_1, EXPAND(array_1, , max_width, fill_char),
expanded_array_2, EXPAND(array_2, , max_width, fill_char),
stack(expanded_array_1, expanded_array_2, stack_bool)
)
);
_stackAndExpandAllDimensions =
LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_TRUE],
LET(
max_width, _maxWidth(array_1, array_2),
max_height, _maxHeight(array_1, array_2),
fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
stack_bool,
if(
isomitted(vstack_bool_df_TRUE),
TRUE,
vstack_bool_df_TRUE
),
expanded_array_1, EXPAND(array_1, max_height, max_width, fill_char),
expanded_array_2, EXPAND(array_2, max_height, max_width, fill_char),
if(stack_bool, vstack(expanded_array_1, expanded_array_2), hstack(expanded_array_1, expanded_array_2))
)
);
Sorry for what will seem like a code dump, but I’m curious if any of you have tried to create similar modules or if you have any comments/wisdom to share about the current incarnation of my module. I recently ran it through chatgpt, so I’m not sure if it slipped in any errors – the structure should, however, be largely accurate and give enough details to let you understand what I tried to accomplish. Do you notice any shortcomings, obvious enhancements, or alternate approaches to the functions? In particular, I am always concerned about the alternate ways to handle array functions and whether my intuition about formula efficiencies is in the right place. If there is enough interest, I may share some of my other modules. // arr module
// This module provides a suite of array manipulation functions to enhance and extend Excel’s native capabilities.
// The functions are scoped under the `arr.` namespace to prevent naming conflicts with Excel’s built-in functions, ensuring reliable use throughout any workbook.
// Function names have been carefully selected to avoid ambiguity or collision with Excel’s native features, especially when referenced internally without the `arr.` prefix.
// Below is an overview of the functions provided in this module, organized by their core functionalities:
// 1. Basic Information (Public Interface)
// These functions provide basic array analysis and selection tools.
// – dimensions: Returns the number of rows and columns in an array, optionally labeled with dimension names.
// – getColumnIdxByName: Retrieves column indices from an array based on header names.
// – uniqueElements: Extracts unique elements from an array, returning them either as a row or column.
// – countsByElement: Counts occurrences of elements in an array with options for ignoring blanks, errors, and sorting.
// 2. Comparisons (Public Interface)
// Functions that allow for comparison between arrays and columns.
// – areEqualDimension: Checks if two arrays have equal dimensions (width, height, or size).
// – compareColumns: Compares columns of an array based on a value and a specified operator.
// – getDiffDimensionFunc: Calculates the difference in dimensions (width, height, or size) between two arrays.
// 3. Miscellaneous Functions (Public Interface)
// General functions for filling arrays and creating values.
// – fillArray: Fills an array with specified text over a defined number of rows and columns.
// 4. Core Operations (Public Interface)
// These high-level array manipulation functions are designed for direct user interaction and support common array tasks.
// Basic Combination and Addition
// – stack: Stack two arrays either vertically or horizontally.
// – stackOn: Stack arrays with user-specified placement (e.g., above, below, left, right).
// – stackAndExpand: Stack two arrays while expanding dimensions to match as needed.
// Subset Selection and Deletion
// – sliceCols: Extract or remove specific columns from an array.
// – sliceRows: Extract or remove specific rows from an array.
// – trimValue: Trim specified values (e.g., blanks) from rows or columns.
// – getColumnsByName: Retrieve columns from an array by header name.
// – getNonZeroCells: Filter a row or column down to its non-zero cells.
// 5. Complex Transformations (Public Interface)
// These functions enable higher-level array manipulations such as flattening, replacing, or transforming data.
// – flatten: Convert a two-dimensional array into a one-dimensional list, with options for sorting and filtering.
// – replaceBlankCells: Replace blank cells in an array with a specified value.
// – replaceCell: Replace specific values in an array based on a condition.
// – replaceCols: Replace or insert entire columns in an array with options to match dimensions.
// – replaceRows: Replace or insert entire rows in an array with options to match dimensions.
// 6. Helper Functions (Internal Use)
// These internal-use functions assist with specific operations and are prefixed with an underscore to denote their private nature.
// Dimension and Size Helpers
// – _areSameHeight: Checks if two arrays have the same height.
// – _areSameWidth: Checks if two arrays have the same width.
// – _areSameSize: Checks if two arrays have the same size.
// – _ensureHeight: Ensure an array has the same or greater height than a reference array.
// – _ensureWidth: Ensure an array has the same or greater width than a reference array.
// – _diffHeight: Calculates the height difference between two arrays.
// – _diffWidth: Calculates the width difference between two arrays.
// – _diffSize: Calculates the size difference (width and height) between two arrays.
// – _maxHeight: Gets the maximum height between two arrays.
// – _maxWidth: Gets the maximum width between two arrays.
// Stacking Logic Helpers
// – _stackSwitch: Determines stacking behavior (e.g., above, below, left, right) based on user input.
// – _stackAndExpandSwitch: Expands dimensions as necessary before stacking based on user preference.
// – _stackAndExpandHeight: Expands and stacks arrays by height.
// – _stackAndExpandWidth: Expands and stacks arrays by width.
// – _stackAndExpandAllDimensions: Expands and stacks arrays in both dimensions (width and height).
// Basic Information
dimensions =
lambda(
target_array,
[show_names_df_FALSE],
if(
if(
isomitted(show_names_df_FALSE),
FALSE,
show_names_df_FALSE
),
vstack(hstack("rows", "columns"), hstack(rows(target_array), columns(target_array))),
hstack(rows(target_array), columns(target_array))
)
);
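// Example (illustrative sample array; names shown without the arr. prefix):
//   =dimensions({1,2,3;4,5,6})        -> {2,3}
//   =dimensions({1,2,3;4,5,6}, TRUE)  -> {"rows","columns";2,3}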
getColumnIdxByName =
lambda(
array_with_headers,
column_names_row,
hstack(bycol(column_names_row, lambda(column_name, match(column_name, take(array_with_headers,1),0))))
);
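// Example (hypothetical array whose first row holds headers):
//   =getColumnIdxByName({"id","name","qty";1,"a",10}, {"qty","id"})  -> {3,1}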
uniqueElements =
lambda(
target_array,
[return_as_col_bool_df_TRUE],
trimValue(unique(flatten(target_array, return_as_col_bool_df_TRUE)))
);
countsByElement =
lambda(
target_array,
[search_array_df_SELF],
[show_element_values_df_FALSE],
[ignore_blanks_df_FALSE],
[ignore_errors_df_FALSE],
[sort_elements_df_0],
[traverse_cols_first_df_TRUE],
let(
flattened_target_array, flatten(target_array,,ignore_blanks_df_FALSE,ignore_errors_df_FALSE,,sort_elements_df_0,traverse_cols_first_df_TRUE),
flattened_search_array, if(isomitted(search_array_df_SELF), flattened_target_array, flatten(search_array_df_SELF)),
elements, unique(flattened_target_array),
pre_result,
byrow(
elements,
lambda(
element,
iferror(rows(filter(flattened_search_array, flattened_search_array=element)),0)
)
),
result,
if(
if(
isomitted(show_element_values_df_FALSE),FALSE,show_element_values_df_FALSE
),
hstack(elements, pre_result),
pre_result
),
result
)
);
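// Example (illustrative sample array; counts the whole array against itself):
//   =countsByElement({"a","b";"a","c"}, , TRUE)  -> {"a",2;"b",1;"c",1}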
// Comparisons
areEqualDimension = LAMBDA(dimension, array1, array2,
SWITCH(
dimension,
“width”, _areSameWidth(array1, array2),
“height”, _areSameHeight(array1, array2),
“size”, _areSameSize(array1, array2),
ERROR.TYPE(3)
)
);
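// Example (illustrative sample arrays):
//   =areEqualDimension("width", {1,2}, {3,4;5,6})  -> TRUE  (both are 2 columns wide)
//   =areEqualDimension("size",  {1,2}, {3,4;5,6})  -> FALSE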
compareColumns = LAMBDA(value_row, array_for_comparison, [comparison_operator], [comparison_col_idx], [value_col_idx],
LET(
operator, IF(ISOMITTED(comparison_operator), "=", comparison_operator),
comp_func, mask.comparisonFunc(operator), // mask.comparisonFunc returns #VALUE! for invalid operators
col_idx, IF(ISOMITTED(comparison_col_idx), 1, comparison_col_idx),
val_idx, IF(ISOMITTED(value_col_idx), 1, value_col_idx),
comp_value, IF(COLUMNS(value_row) > 1, CHOOSECOLS(value_row, val_idx), value_row),
comp_array, CHOOSECOLS(array_for_comparison, col_idx),
IF(comp_func = ERROR.TYPE(3), ERROR.TYPE(3), comp_func(comp_value, comp_array)) // Propagate #VALUE! if operator is invalid
)
);
getDiffDimensionFunc = LAMBDA(dimension, array1, array2,
SWITCH(
dimension,
“width”, _diffWidth(array1, array2),
“height”, _diffHeight(array1, array2),
“size”, _diffSize(array1, array2),
ERROR.TYPE(3)
)
);
// Miscellaneous functions
fillArray = LAMBDA(r, c, txt, MAKEARRAY(r, c, LAMBDA(row, col, txt)));
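// Example:
//   =fillArray(2, 3, "x")  -> {"x","x","x";"x","x","x"}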
// Stack Functions
stack = lambda(array_1, array_2, [vstack_bool_df_TRUE],
if(
if(
isomitted(vstack_bool_df_TRUE),
TRUE,
vstack_bool_df_TRUE
),
vstack(array_1, array_2),
hstack(array_1, array_2)
)
);
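// Example (illustrative sample arrays; vertical stacking is the default):
//   =stack({1,2;3,4}, {5,6})         -> {1,2;3,4;5,6}
//   =stack({1,2;3,4}, {5;6}, FALSE)  -> {1,2,5;3,4,6}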
stackOn =
lambda(
array_to_stack, fixed_array, [stack_placement_df_RIGHT], [match_shared_dimensions_df_TRUE], [fill_value_df_DBQT],
let(
match_shared_dimension, if(isomitted(match_shared_dimensions_df_TRUE),TRUE,match_shared_dimensions_df_TRUE),
result,
if(
match_shared_dimension,
_stackAndExpandSwitch(array_to_stack, fixed_array, stack_placement_df_RIGHT, fill_value_df_DBQT),
_stackSwitch(array_to_stack, fixed_array, stack_placement_df_RIGHT)
),
result
)
);
stackAndExpand =
lambda(array1, array2, [exp_width_bool_df_TRUE], [fill_value_df_blank], [exp_height_bool_df_TRUE], [vstack_bool_df_TRUE],
let(
expand_width, IF(ISOMITTED(exp_width_bool_df_TRUE), TRUE, exp_width_bool_df_TRUE),
expand_height, IF(ISOMITTED(exp_height_bool_df_TRUE), TRUE, exp_height_bool_df_TRUE),
stack_bool, if(ISOMITTED(vstack_bool_df_TRUE), TRUE, vstack_bool_df_TRUE),
result,
ifs(
expand_height * expand_width,
_stackAndExpandAllDimensions(array1, array2, fill_value_df_blank, stack_bool),
expand_height,
_stackAndExpandHeight(array1, array2, fill_value_df_blank, stack_bool),
expand_width,
_stackAndExpandWidth(array1, array2, fill_value_df_blank, stack_bool),
1,
ERROR.TYPE(3)
),
result
)
);
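// Example (illustrative; the 1x1 array is padded with "-" to match before vstacking):
//   =stackAndExpand({1,2;3,4}, {5}, , "-")  -> {1,2;3,4;5,"-";"-","-"}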
// Subset selection and Deletion
getColumnsByName =
lambda(
array_with_headers,
column_names_row,
choosecols(drop(array_with_headers,1),getColumnIdxByName(array_with_headers,column_names_row))
);
getNonZeroCells = LAMBDA(target_row_or_col,
LET(is_not_zero, is.notZero(target_row_or_col), FILTER(target_row_or_col, is_not_zero, ""))
);
sliceCols =
LAMBDA(
original_array,
no_columns_to_drop,
[no_of_columns_to_take],
[no_columns_to_drop_from_end],
LET(
after_first_drop, DROP(original_array, , no_columns_to_drop),
after_take,
IF(
ISOMITTED(no_of_columns_to_take),
after_first_drop,
TAKE(after_first_drop, , no_of_columns_to_take)
),
after_second_drop,
IF(
ISOMITTED(no_columns_to_drop_from_end),
after_take,
DROP(after_take, ,-no_columns_to_drop_from_end)
),
after_second_drop
)
);
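// Example (drop 1 column from the start, then take 2):
//   =sliceCols({1,2,3,4;5,6,7,8}, 1, 2)  -> {2,3;6,7}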
sliceRows =
LAMBDA(
original_array,
no_rows_to_drop,
[no_rows_to_take],
[no_rows_to_drop_from_end],
LET(
after_first_drop, DROP(original_array, no_rows_to_drop),
after_take,
IF(
ISOMITTED(no_rows_to_take),
after_first_drop,
TAKE(after_first_drop, no_rows_to_take)
),
after_second_drop,
IF(
ISOMITTED(no_rows_to_drop_from_end),
after_take,
DROP(after_take, -no_rows_to_drop_from_end)
),
after_second_drop
)
);
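// Example (drop 1 row from the start, then take 2):
//   =sliceRows({1;2;3;4}, 1, 2)  -> {2;3}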
trimValue =
lambda(
target_row_or_col,
[trim_value_df_BLANK],
let(
trim_mask,
if(
isomitted(trim_value_df_BLANK),
not(isblank(target_row_or_col)),
not(target_row_or_col = trim_value_df_BLANK)
),
filter(target_row_or_col, trim_mask,"")
)
);
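// Example (trimming an explicit value, since array constants cannot hold true blanks):
//   =trimValue({1;0;2;0}, 0)  -> {1;2}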
// Complex Transformations
flatten = LAMBDA(
target_array,
[return_as_column_bool_df_TRUE],
[ignore_blanks_df_FALSE],
[ignore_errors_df_FALSE],
[unique_elements_only_df_FALSE],
[sort_elements_df_0],
[traverse_cols_first_df_TRUE],
LET(
make_column_bool,
IF(ISOMITTED(return_as_column_bool_df_TRUE), TRUE, return_as_column_bool_df_TRUE),
ignore_blanks,
IF(ISOMITTED(ignore_blanks_df_FALSE), FALSE, ignore_blanks_df_FALSE),
ignore_errors,
IF(ISOMITTED(ignore_errors_df_FALSE), FALSE, ignore_errors_df_FALSE),
ignore_value,
(ignore_blanks * 1) + (ignore_errors * 2),
traverse_cols_first,
if(isomitted(traverse_cols_first_df_TRUE),TRUE,traverse_cols_first_df_TRUE),
pre_result,
IF(
make_column_bool,
TOCOL(target_array, ignore_value, traverse_cols_first),
TOROW(target_array, ignore_value, traverse_cols_first)
),
unique_elements_only_bool,
if(isomitted(unique_elements_only_df_FALSE), FALSE, unique_elements_only_df_FALSE),
sort_elements_value,
if(isomitted(sort_elements_df_0), 0, sort_elements_df_0),
after_unique_result,
if(unique_elements_only_bool, unique(pre_result), pre_result),
after_sort_result,
switch(
sort_elements_value,
0,
after_unique_result,
1,
sort(after_unique_result),
-1,
sort(after_unique_result,, -1),
error.type(3)
),
after_sort_result
)
);
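// Example (columns are traversed first by default):
//   =flatten({1,2;3,4})         -> {1;3;2;4}
//   =flatten({1,2;3,4}, FALSE)  -> {1,3,2,4}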
replaceBlankCells =
LAMBDA(
array,
[replacement_value],
MAP(
array,
LAMBDA(
cur_cell,
IF(
ISBLANK(cur_cell),
IF(ISOMITTED(replacement_value), "", replacement_value),
cur_cell
)
)
)
);
replaceCell =
LAMBDA(
array,
target_cell_value,
replacement_value,
[comparison_operator],
MAP(
array,
LAMBDA(
cur_cell_value,
let(
comparison_func,
IF(
ISOMITTED(comparison_operator),
mask.comparisonFunc(“=”),
mask.comparisonFunc(comparison_operator)
),
comparison_result, comparison_func(cur_cell_value, target_cell_value),
if(
comparison_result,
replacement_value,
cur_cell_value
)
)
)
)
);
replaceCols =
LAMBDA(
replacement_cols,
original_array,
[target_col_idx],
[insert_bool_default_false],
[trim_to_orig_size_bool_df_FALSE],
[expand_replacement_cols_to_match_rows_df_TRUE],
[expand_original_cols_to_match_rows_df_TRUE],
LET(
col_idx, IF(ISOMITTED(target_col_idx), 1, target_col_idx),
orig_cols, columns(original_array),
insert_bool,
IF(
ISOMITTED(insert_bool_default_false),
FALSE,
insert_bool_default_false
),
adj_orig_array,
if(
if(
isomitted(expand_original_cols_to_match_rows_df_TRUE),
TRUE,
expand_original_cols_to_match_rows_df_TRUE
),
_ensureHeight(replacement_cols,original_array),
original_array
),
adj_replacement_cols,
if(
if(
isomitted(expand_replacement_cols_to_match_rows_df_TRUE),
TRUE,
expand_replacement_cols_to_match_rows_df_TRUE
),
_ensureHeight(original_array,replacement_cols),
replacement_cols
),
first_part,
IF(
col_idx > 1,
HSTACK(TAKE(adj_orig_array, ,col_idx - 1), adj_replacement_cols),
adj_replacement_cols
),
drop_cols,
if(
orig_cols>=col_idx,
if(
insert_bool,
col_idx-1,
col_idx+columns(adj_replacement_cols)-1
),
0
),
combined_parts,
IF(
or(drop_cols=0,drop_cols>=orig_cols),
first_part,
hstack(first_part, drop(adj_orig_array, ,drop_cols))
),
if(
if(
isomitted(trim_to_orig_size_bool_df_FALSE),
FALSE,
trim_to_orig_size_bool_df_FALSE
),
take(combined_parts, ,orig_cols),
combined_parts
)
)
);
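// Example (illustrative; replaces the first column of a 2x2 array):
//   =replaceCols({9;9}, {1,2;3,4}, 1)  -> {9,2;9,4}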
replaceRows =
LAMBDA(
replacement_rows,
original_array,
[target_row_idx],
[insert_bool_df_false],
[trim_to_orig_size_bool_df_FALSE],
[expand_replacement_rows_to_match_cols_df_TRUE],
[expand_original_rows_to_match_cols_df_TRUE],
LET(
row_idx, IF(ISOMITTED(target_row_idx), 1, target_row_idx),
orig_rows, rows(original_array),
insert_bool,
IF(
ISOMITTED(insert_bool_df_false),
FALSE,
insert_bool_df_false
),
adj_orig_array,
if(
if(
isomitted(expand_original_rows_to_match_cols_df_TRUE),
TRUE,
expand_original_rows_to_match_cols_df_TRUE
),
_ensureWidth(replacement_rows, original_array),
original_array
),
adj_replacement_rows,
if(
if(
isomitted(expand_replacement_rows_to_match_cols_df_TRUE),
TRUE,
expand_replacement_rows_to_match_cols_df_TRUE
),
_ensureWidth(original_array,replacement_rows),
replacement_rows
),
first_part,
IF(
row_idx > 1,
VSTACK(TAKE(adj_orig_array, row_idx - 1), adj_replacement_rows),
adj_replacement_rows
),
drop_rows,
if(
rows(adj_orig_array)>=row_idx,
if(
insert_bool,
row_idx-1,
row_idx+rows(adj_replacement_rows)-1
),
0
),
combined_parts,
IF(
or(drop_rows<=0, drop_rows>=rows(adj_orig_array)),
first_part,
vstack(first_part, drop(adj_orig_array, drop_rows))
),
result,
if(
if(
isomitted(trim_to_orig_size_bool_df_FALSE),
FALSE,
trim_to_orig_size_bool_df_FALSE
),
take(combined_parts, orig_rows),
combined_parts
),
result
)
);
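// Example (illustrative; replaces the first row of a 2x2 array):
//   =replaceRows({9,9}, {1,2;3,4}, 1)  -> {9,9;3,4}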
// Dimension and Size Helpers
_areSameHeight = LAMBDA(array1, array2,
ROWS(array1) = ROWS(array2)
);
_areSameWidth = LAMBDA(array1, array2,
COLUMNS(array1) = COLUMNS(array2)
);
_areSameSize = LAMBDA(array1, array2,
AND(_areSameWidth(array1, array2), _areSameHeight(array1, array2))
);
_ensureHeight =
lambda(
reference_array,
expansion_array,
[fill_value_df_DBLQT],
expand(
expansion_array,
max(rows(reference_array), rows(expansion_array)),,
if(isomitted(fill_value_df_DBLQT), "", fill_value_df_DBLQT)
)
);
_ensureWidth =
lambda(
reference_array,
expansion_array,
[fill_value_df_DBLQT],
expand(
expansion_array, ,
max(columns(reference_array), columns(expansion_array)),
if(isomitted(fill_value_df_DBLQT), "", fill_value_df_DBLQT)
)
);
_diffHeight = LAMBDA(array1, array2,
ROWS(array1) - ROWS(array2)
);
_diffWidth = LAMBDA(array1, array2,
COLUMNS(array1) - COLUMNS(array2)
);
_diffSize = LAMBDA(array1, array2,
HSTACK(_diffHeight(array1, array2), _diffWidth(array1, array2))
);
_maxHeight = LAMBDA(arr_1, arr_2,
LET(
arr_1_height, ROWS(arr_1),
arr_2_height, ROWS(arr_2),
max_height, MAX(arr_1_height, arr_2_height),
max_height
)
);
_maxWidth = LAMBDA(arr_1, arr_2,
LET(
arr_1_width, COLUMNS(arr_1),
arr_2_width, COLUMNS(arr_2),
max_width, MAX(arr_1_width, arr_2_width),
max_width
)
);
// Stacking Logic Helpers
_stackSwitch =
lambda(
array_to_stack, fixed_array, stack_placement_df_RIGHT,
switch(
if(isomitted(stack_placement_df_RIGHT),"right",stack_placement_df_RIGHT),
"above",
vstack(array_to_stack, fixed_array),
"below",
vstack(fixed_array, array_to_stack),
"left",
hstack(array_to_stack, fixed_array),
"right",
hstack(fixed_array, array_to_stack),
error.type(3)
)
);
_stackAndExpandSwitch =
lambda(
array_to_stack, fixed_array, stack_placement_df_RIGHT, [fill_value_df_DBQT],
switch(
if(isomitted(stack_placement_df_RIGHT),"right",stack_placement_df_RIGHT),
"above",
_stackAndExpandWidth(array_to_stack, fixed_array, fill_value_df_DBQT),
"below",
_stackAndExpandWidth(fixed_array, array_to_stack, fill_value_df_DBQT),
"left",
_stackAndExpandHeight(array_to_stack, fixed_array, fill_value_df_DBQT),
"right",
_stackAndExpandHeight(fixed_array, array_to_stack, fill_value_df_DBQT),
_stackAndExpandHeight(fixed_array, array_to_stack, fill_value_df_DBQT),
error.type(3)
)
);
_stackAndExpandHeight =
LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_FALSE],
LET(
max_width, _maxWidth(array_1, array_2),
max_height, _maxHeight(array_1, array_2),
fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
stack_bool,
if(
isomitted(vstack_bool_df_FALSE),
FALSE,
vstack_bool_df_FALSE
),
expanded_array_1, EXPAND(array_1, max_height, , fill_char),
expanded_array_2, EXPAND(array_2, max_height, , fill_char),
stack(expanded_array_1, expanded_array_2, stack_bool)
)
);
_stackAndExpandWidth =
LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_TRUE],
LET(
max_width, _maxWidth(array_1, array_2),
max_height, _maxHeight(array_1, array_2),
fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
stack_bool,
if(
isomitted(vstack_bool_df_TRUE),
TRUE,
vstack_bool_df_TRUE
),
expanded_array_1, EXPAND(array_1, , max_width, fill_char),
expanded_array_2, EXPAND(array_2, , max_width, fill_char),
stack(expanded_array_1, expanded_array_2, stack_bool)
)
);
_stackAndExpandAllDimensions =
LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_TRUE],
LET(
max_width, _maxWidth(array_1, array_2),
max_height, _maxHeight(array_1, array_2),
fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
stack_bool,
if(
isomitted(vstack_bool_df_TRUE),
TRUE,
vstack_bool_df_TRUE
),
expanded_array_1, EXPAND(array_1, max_height, max_width, fill_char),
expanded_array_2, EXPAND(array_2, max_height, max_width, fill_char),
if(stack_bool, vstack(expanded_array_1, expanded_array_2), hstack(expanded_array_1, expanded_array_2))
)
);
Townhall recording for in-org-guests
I set up a Town Hall in Microsoft Teams and invited the members of a Microsoft 365 group. This group also includes guests who do not have a business, school, or university account. All participants can open and join the Town Hall via the link in the email invitation. Once I publish the recording after the Town Hall ends, all invitees receive another email informing them that the recording of the event is available. However, only people with a business, school, or university account can access the recording (VOD) via the link provided. If I alternatively provide a link for people with existing access through my OneDrive in the “Recordings” folder (shared with all members of the above Microsoft 365 group), access to the recording is possible. Is there a way to enable access to the VOD for people who do not have a business, school, or university account?
Microsoft at PASS Data Community Summit 2024!
We’re thrilled to return as a Sapphire Sponsor for this year’s PASS Data Community Summit, together with our partner Intel. With 20+ sessions, breakfast and lunch panels, pre-cons, and more, our product experts and engineers will be there, ready to share the latest innovations across SQL Server and Azure SQL, and to go beyond the cloud with the latest from Microsoft Fabric and the world of AI.
Join us to “Connect, Share and Learn” alongside the rest of your peers in the PASS community. The official event kicks off Wednesday with the opening keynote from Shireesh Thota, Corporate Vice President, Azure Databases, and other key Microsoft data leaders.
Can’t wait until the event?
Get an early preview of the sessions at PASS Summit! Watch this free webinar hosted by Redgate’s Steve Jones and Rie Merritt, Principal PM Manager at Microsoft. Don’t miss this opportunity for a first look into the topics we’ll be covering; register to watch on demand.
From SQL Server to Azure SQL, analytics, and governance, Microsoft’s experts will bring the latest product developments to help you build the right data platform for your business needs. Here’s a preview of some of the sessions; view the complete list here:
Highlights of Microsoft sessions at PASS Data Community Summit 2024:
Breakfast with the Microsoft Data Leadership Team
Bob Ward, Shireesh Thota, Asad Khan, Anna Hoffman
Engage with key leaders over breakfast and discuss the future of Azure data solutions.
Join us for a lunch and learn with Bob Ward as he covers all things SQL and dives into his latest release: Azure SQL Revealed: The Next-Generation Cloud Database with AI and Microsoft Fabric.
Pre-Conference Workshops
Tuesday: The SQL AI Workshop Bob Ward, Muazma Zahid, Davide Mauri
This workshop is your one-stop shop to dive deep into the latest innovations in SQL and AI with hands-on training from our Microsoft engineering team.
General
AI-Assisted SQL Server: Transforming Database Management
Bob Ward, Erin Stellato, Anna Hoffman
Discover how AI is revolutionizing database management and enabling new levels of efficiency and insight.
Unlocking the Power of Data with Microsoft Fabric
Ravs Kaur
Join us for an insightful session on what’s next for Microsoft Fabric, an all-in-one analytics solution.
Unlocking the Power of Azure SQL Database: AI, Elastic Pools, and Beyond
Muazma Zahid, Arvind Shyamsundar, Davide Mauri
Explore the advanced capabilities of Azure SQL, from AI integration to Elastic Pools.
Accelerate Your Modernization Journey with Azure Databases
Dr. Dani Ljepava, Niko Neugebauer, Dhananjay Mahajan
Learn best practices for modernizing your data platform and leveraging the full power of Azure databases.
Seamless Database Management of SQL Server for Hybrid Environments
Nikita Takru, Lance Wright
Find out how to manage your SQL Server environments efficiently with Azure’s hybrid capabilities.
Learning Pathways
Becoming an Azure SQL DBA (multiple sessions)
Dr. Dani Ljepava, Erin Stellato, Pam Lahoud, Niko Neugebauer, Bob Ward
Follow a structured pathway to advance your skills and become an Azure SQL DBA.
Theater Sessions
Be sure to visit the Microsoft booth during expo hours and check out the complete list of theater sessions!
–Exclusive Offer for PASS Attendees–
As a special offer from Microsoft, use the code AZURE150 to receive $150 off your 3-day conference pass. Don’t miss this opportunity to connect, grow, and learn with the community. Register today!
Community
Did you know that Microsoft offers organizers of user groups a free meet up license and several other benefits to host your local in-person or virtual meeting? Start your journey here
Microsoft Tech Community – Latest Blogs
The Marketplace Partner Digest | October 2024
Welcome to the October edition of Partner Release Notes from the Microsoft commercial marketplace! The marketplace is central to how we keep you ahead, helping you reach more customers, simplify sales, and unlock growth. Dive into the latest product insights designed to keep you up to date with everything happening in the marketplace.
NEW! Don’t miss an update! Subscribe to the “Partner Digest” label to get notified whenever a new Marketplace Partner Digest hits the marketplace community blog. Need help configuring your settings? Check out this recent community post for how-to guidance: Managing your marketplace community subscriptions – Microsoft Community Hub.
_______________________________________________________________________________________________________________________________________________
🚀 Noteworthy Highlights
Reduced Agency Fees for Marketplace Renewals
As a partner-focused business platform, we’re evolving our marketplace agency fee structure to provide ongoing value to our partners and customers. Partners will now benefit from a 50% reduced agency fee for renewals sold as private offers through the marketplace. This reduction applies automatically when claiming either an existing marketplace agreement or an off-marketplace sale as a renewal in Partner Center.
The value for you and your customers:
Secure bigger deals and keep more margin while solidifying valuable relationships with your customers.
Customers will continue to get more value for their investments with 100% of eligible purchases counting toward their cloud commitment.
Resources to learn more:
Read the blog on how we’re maximizing partner success with marketplace changes.
Check out the technical documentation for in-depth information.
Partner Center AI Assistant (Preview) Now Available
We introduced an AI-powered assistant in Partner Center to enhance the partner experience. Currently available in English with more languages coming by the end of 2024, the AI assistant delivers tailored insights, intelligent suggestions, and quick answers to your day-to-day questions.
Training Partners Now Featured on AppSource
The Partner Directory on AppSource has been updated to showcase Training Service Partners, boosting visibility for qualified partners and helping customers find expertise to accelerate their adoption of Microsoft and AI cloud technologies. Explore the Partner Directory and learn how to enroll as a Microsoft Training Services Partner.
New Hugging Face Models Available in the Azure AI Model Catalog
The Azure AI Model Catalog, built on the marketplace, now includes 18 new Hugging Face models. This enables partners to access a wider range of AI models to build innovative applications and services to sell through the marketplace. Learn more.
Updated ATO Reporting Requirements for Australian Marketplace Sellers
Attn.: Publishers transacting or eligible to transact in the Australian Microsoft Store and Microsoft commercial marketplace
To comply with the Australian Taxation Office’s (ATO) new Sharing Economy Reporting Regime (SERR), Microsoft is now collecting additional information from publishers transacting in the Australian Microsoft Store and marketplace, including business identifiers, personal details, and bank information. Learn more here.
🌟 Partner Resources
We’ve curated best-in-class resources to help you get the most out of the Microsoft commercial marketplace! Explore these exclusive resources below:
Read the latest marketplace blog for customers: Unlock your data with AI solutions from the Microsoft commercial marketplace
UK Marketplace Summit On-demand Content: Catch up on key takeaways and insights from last month’s UK Marketplace Summit, including how to activate the channel through marketplace. Watch the on-demand content now!
NEW! Marketplace playbook on Microsoft Learn: This comprehensive guide provides best practices and step-by-step instructions to streamline your onboarding and optimize your selling experience. Check it out here.
Upcoming Mastering the Marketplace webinars: Access live and on-demand webinars designed to help you develop transactable offers for the marketplace. Explore the webinars below and mark your calendars! All times are listed in PDT.
October 23rd, 9 AM: Developing your container offer
November 5th, 9 AM: Creating your first offer in Partner Center
November 12th, 9 AM: Creating plans and pricing for your offer
November 13th, 9 AM: Unlocking sales through the marketplace
⏰ Recent & Upcoming Events
The marketplace @ Microsoft Ignite 2024: Microsoft Ignite is happening next month in Chicago from November 19-22, 2024! While in-person tickets are sold out, you can still participate in the digital experience! Discover the latest in marketplace developments and learn how to capture the marketplace and AI opportunity. Don’t miss out on these exclusive marketplace breakout sessions. Register now to secure your access today!
BRK125 – More than a storefront, unlocking value through the marketplace (for customers!): Learn what new capabilities are available in the marketplace to help buy vendor solutions with confidence and drive AI innovation.
BRK343 – How to capture the marketplace opportunity: Get guidance to maximize your marketplace success, including how to sell through Microsoft sales channels.
BRK344 – Activating the channel opportunity through the marketplace: Explore additional ways to monetize with features like multiparty private offers and professional services that boost your marketplace success.
BRK350 – What’s new in ISV Success – AI benefits for software companies and more: Learn about ISV Success’ new AI benefits and cash incentives designed to help you build innovative AI experiences through the marketplace.
And if you’re in Chicago for Ignite, come talk with marketplace experts at the Microsoft AI Cloud Partner Program (MAICPP) area of the Hub space and join us for the following theater sessions (held in person only):
THR679 – 10 tips for marketplace success: Quick tips on how to maximize marketplace success with Microsoft product experts.
THR681 – Step-by-step instructions to activate multiparty private offers: Get hands-on guidance on how to enroll and become eligible to sell multiparty private offers through the marketplace.
👋 Share Your Feedback!
We truly appreciate your feedback and want to ensure these Partner Release Notes deliver the information you need to succeed in the marketplace. If you have any feedback or suggestions on how we can continue to improve the content to best support you, we’d love to hear from you in the comments below!
Thank you!
Including hyperlinks when linking cells in different sheets
Hello! So, I have a file with several sheets – the first is a master list of cells linking to external resources, while the other sheets are divided by topic. Each cell in the master list sheet contains the text name of the link (for example “Health Equity”) and a static hyperlink (for example https://www.phaboard.org/wp-content/uploads/Health-Equity.pdf).
What I need to do is link both the cell content AND its hyperlink on another sheet. With this example, that would be the sheet containing all the health equity resources. These individual topic sheets contain the external resources as well as other content, and are ultimately saved as PDFs and included as part of an online training course, so the external hyperlinks need to be included.
The goal is to be able to scroll down the master list and click on each cell to check that the link still works – the webpage hasn’t been changed, etc. If a link does need to be edited, I want to be able to edit both the title (the text in the cell) and the hyperlink itself from the master list sheet AND have them both automatically update in the linked cell on another sheet, so that all I have to do is re-save the sheet as a PDF.
I know how to link the text in the cell to a different sheet, using either paste special>paste link or the formula =’sheetname’!$G12, but this does not include the hyperlink. I still have to update that manually. Does anyone know a way for the external hyperlink to be included in the formula (or multiple formulas) so that both are updated simultaneously? Thank you!
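One approach, assuming you are willing to store the display text and the URL in separate columns on the master sheet (the sheet and column layout here is hypothetical): each topic sheet can rebuild a live link with HYPERLINK, e.g. =HYPERLINK('Master List'!$B2, 'Master List'!$A2), where column A holds the title and column B holds the raw URL. Editing either column on the master list then updates both the friendly name and the target everywhere the formula is used.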
COUNTIF function
Trying to count all cells in a column that are not “Closed” and also excluding the counting of any blank cells. The below formula gives me a count of over 1 million when it should be 178.
=COUNTIF('All Defects'!I:I,"<>Closed")
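A hedged sketch of one possible fix: applied to a full column, COUNTIF also counts every blank cell as "not Closed", which is why the result lands near the worksheet's row limit. Adding a second, non-blank criterion with COUNTIFS excludes the blanks:

```excel
=COUNTIFS('All Defects'!I:I,"<>Closed",'All Defects'!I:I,"<>")
```

The "<>" criterion matches only non-blank cells, so the count covers cells that are non-blank and not equal to "Closed".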
New Feature Suggestion – Group Worksheets
Hello,
I would like to suggest a new feature to add to Excel. I would like the ability to group my worksheets in a similar way you can group columns and rows.
My firm has many workbooks that contain (too) many worksheets we have to scroll through to find the one we need. The ability to make groups instead of just color coding would be a significant improvement to our user experience and workflow.
Any thoughts and feedback are appreciated.
SUM is not calculating
My SUM function is not calculating my rows of numbers.
Power automate flow to refresh all data in an Excel Online file on SharePoint
Hello, I have an Excel script that will refresh all data connections within the file. I am using Get Data > Excel file to pull in another Excel file I need. I have tested the script and it works perfectly. I need this script to run in the background while I am away from my laptop, so I have tried to use Power Automate to run the script in my Excel file every X minutes. My Power Automate flow is simple: Recurrence > Run Script.
I have been testing my Power Automate flow and getting success messages, but when I open the Excel file I do not see any updated data in the table that is supposed to be there. I do not think this is a script issue, because the script even logs the time each run completes successfully, and that log updates each time I trigger the flow – yet the data is still not refreshed. I do not understand why my flow and script seem to run correctly while the data connections are not refreshed.
Intune deployed extensions for Edge browser need to be enabled and logged into on browser restart
I have successfully deployed about a dozen Edge extensions using Intune. I have blocked all extensions and set my 12 as the only allowed ones. They install fine and work as expected until there is an update for Edge.
Whenever Edge is restarted, the extensions are all hidden from the toolbar on open. A couple of the extensions require logging in, and I have to do this each time Edge restarts. Four of them open tabs as though they had just been installed for the first time.
Prior to managing these with Intune they did not behave this way despite the same ones all being installed “manually”. If I was logged into an extension and restarted Edge, I was still logged in on open.
It’s almost as if they are being installed each time Edge is initially launched.
Are there any suggestions of where to look or what I could do to curtail this behavior? It is delaying me rolling the extensions policy out to all of my users. They’d hang me if they had to deal with this on every Edge update or restart.
Please let me know if there is additional information that would be helpful in diagnosing the issue.
Queries on Log Analytics
Hi All,
I was hoping someone would be able to assist me, or at least point me in the right direction.
We have a number of Log Analytics Workspaces generating significant costs every month.
What I would like to do is the following via script (PowerShell or KQL for instance):
1. I would like to pull the costs for the Log Analytics Workspaces and be able to show them all on a single graph (exporting to *.csv would allow me to do so).
2. I would like to pull a report on a Log Analytics Workspace (LAW) and align the costs for that workspace with the resources using it (the current options in Cost Management + Billing only show log ingestion).
Has anyone been able to do something similar?
Thanks!
CV
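For the cost-breakdown side of this, a hedged KQL sketch run against the workspace itself (the Usage table and its columns are standard in Log Analytics, but the GB conversion is an assumption – multiply by your own per-GB rate to estimate cost):

```kql
// Billable ingestion volume per data type over the last 30 days.
// Quantity is reported in MB; divide by 1024 to approximate GB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1024 by DataType
| sort by BillableGB desc
```

Exporting the summarized result to CSV from the Log Analytics query UI would support the single-graph comparison described in point 1; per-resource attribution is harder and typically relies on the `_BilledSize` and `_ResourceId` columns of individual tables, which varies by table.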
Upcoming design updates: Microsoft Purview Message Encryption Portal
The Microsoft Purview Message Encryption portal will undergo minor design updates to align with Purview branding: fonts, colors, controls, and more will be updated. These changes are designed to enhance the user experience without causing any disruptions. Microsoft will begin rolling out the changes in mid-October 2024 and expects to complete the rollout by mid-December 2024.
See below for side-by-side comparisons of the Login, One-time passcode, View mail, View attachment, and Reply UI in the portal.
Login with branding customization (before)
Login with branding customization (after)
One time passcode (before)
One time passcode (after)
View mail (before)
View mail (after)
View attachment (before)
View attachment (after)
The ‘Reply’ editor has also been updated:
Removed the bottom button bar.
Moved the emoji button to the formatting bar.
Relocated the toggle formatting option button to the top bar.
Updated the insert table control to allow size entry.
Reply mail (before)
Reply mail (after)
If your organization has captured screenshots of the portal in its documentation, consider updating them as appropriate.
OneDrive app in Teams does not show files
After a tenant rename on October 18, 2024, in which we changed the SharePoint URL, we have an issue with the OneDrive app in Teams. It doesn’t load any files. The OneDrive client in Windows works without any issues. Opening Teams in the web client gives us the same issue. Resetting the Teams client doesn’t help.
Anyone any ideas?
Below is the error we see.
ICYMI: IAMCP Profiles in Partnership Podcast episode 2 and 3 are now available!
Be sure to subscribe to our IAMCP label in the Partner news blog so you are notified of all new episodes!
IAMCP Profiles in Partnership Podcast Ep 2 | Transforming Business with Data, AI, and Partnerships – Microsoft Community Hub
IAMCP Profiles in Partnership Podcast Ep 3 | Personal Connections: The Key to Digital Partnerships – Microsoft Community Hub
AD DS Users in Remote Desktop Users group receive not authorized for remote login
Hello, thanks for checking!
My AD DS config was lost.
I have now built a new PDC for AD DS. I have recreated the users and given them remote permissions via the Remote tab in user details, added them to the Administrators group, and added them to the Remote Desktop Users group. I have joined “PC1” to the domain. I can confirm the user can log in via the console, but when attempting to connect remotely, they receive “The connection was denied because the user account is not authorised for remote login.” The only account that can use RDP at this time is the domain ‘administrator’.
It was working previously.
I have verified that PC1 has Remote Desktop enabled and that I can connect to it via the domain ‘administrator’ account.
I would appreciate any insight into this matter!
Partner Case Study Series | Detego RFID Technology
For accuracy and increased sales, Detego’s RFID technology is a cut above
Legendary fashion designer Diane von Fürstenberg once said, “Style is something each of us already has, all we need to do is find it.” Finding your style can be difficult enough—endlessly scrutinizing the racks of umpteen clothing shops—without the added pitfall of unearthing the perfect pair of boots only to be told they don’t have your size. An inaccurate stock count is not only disheartening to customers; it has the potential to injure a store’s reputation and future sales. With the added pressure of reduced foot traffic due to the COVID-19 pandemic, stores have an increased drive to accommodate shoppers crossing their brick-and-mortar thresholds.
For one prominent luxury clothing retailer in the heart of London, staying on the forefront of high fashion wasn’t an issue. However, to maintain their standing as one of High Street’s most high-quality brands, enticing and satisfying the style-conscious in equal amounts, they would need to upgrade the way they handled inventory. And like the perfect pair of boots, they discovered their answer with a radio-frequency identification (RFID) inventory management system through global software company Detego.
Continue reading here