Category: News
OneDrive search results missing Tiles view
Some of my users have a new interface for web OneDrive search results (for folders shared to them) that no longer offers the Tiles view. The Tiles view is critical for our team to be able to visually search many images. Is the Tiles view being removed? We are sharing large numbers of images that cannot be synced, so we must be able to search OneDrive on the web and see the results visually. It is also now happening for me, even when searching my own OneDrive. There used to be an option on the right side above the list. It defaulted to “List” view but could be changed to “Tiles.” Now it’s gone!
This is what it used to have:
How do I get this function back?
Native JSON support now in preview in Azure SQL Managed Instance
Processing JSON data in Azure SQL Managed Instance just got more performant thanks to the new way JSON data is stored and handled. Now in preview for Azure SQL Managed Instance with Always-up-to-date update policy configured, JSON data can be stored in a new binary data format with database column declared as a new JSON data type:
CREATE TABLE Orders (order_id int, order_details JSON NOT NULL);
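As a quick illustration (a minimal sketch using the Orders table above; the values are made up), rows can be inserted as plain JSON text and queried with existing functions such as JSON_VALUE:
INSERT INTO Orders (order_id, order_details)
VALUES (1, '{"customer": "Contoso", "items": [{"sku": "X1", "qty": 2}]}');

-- Existing JSON functions operate on the new type directly
SELECT order_id, JSON_VALUE(order_details, '$.customer') AS customer
FROM Orders;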
All existing JSON functions support the new JSON data type seamlessly, with no code changes. There are also a couple of new aggregate functions:
1. Constructing a JSON object from an aggregation of SQL data or columns:
SELECT JSON_OBJECTAGG( c1:c2 )
FROM (
    VALUES('key1', 'c'), ('key2', 'b'), ('key3', 'a')
) AS t(c1, c2);
2. Constructing a JSON array from an aggregation of SQL data or columns:
SELECT TOP(5) c.object_id, JSON_ARRAYAGG(c.name ORDER BY c.column_id) AS column_list
FROM sys.columns AS c
GROUP BY c.object_id;
For a quick introduction you can watch a short video explaining the very same functionality on Azure SQL Database:
Resources:
JSON data type (preview) – SQL Server | Microsoft Learn
JSON_OBJECTAGG (Transact-SQL) – SQL Server | Microsoft Learn
JSON_ARRAYAGG (Transact-SQL) – SQL Server | Microsoft Learn
Microsoft Ignite: Don’t wait for the future—invent it
From November 18-22, 2024, join thousands of other curious and inspired minds at Microsoft Ignite to learn what’s possible with AI. Explore tools, build skills, and form partnerships to grow your business and reach more customers—safely, securely, and responsibly.
In addition to a first look at the latest AI technology, you’ll have built-in time to connect with peers and industry experts, network with Microsoft leaders, explore co-sell opportunities, and get all the details on how the Microsoft AI Cloud Partner Program helps you innovate and grow with updated benefits and offerings.
Whether you’re aiming to build a technical roadmap, a brain trust of collaborators and innovators, or a plan for growing your business with Microsoft, you can do it at Ignite. Your business, our shared customers, and organizations across the world are counting on it. Register today. Spots are limited for in-person attendees, and we don’t want you to miss this.
Register for Ignite
Things I learned as a member of the Microsoft Community
For over 15 years at Microsoft, my passion focused on amplifying the voices of our product users. Whether it was via our EAPs (early adopter programs), MVPs, Tech Community members, STEP, MODE, or IT, it did not matter, because each of their voices reflected every user. I am still part of the Microsoft Community. I speak onsite at events, participate in online conferences, and post to different tech communities – like this one. That may seem like a lot of time to dedicate to helping others, so let me tell you what you get in return. Here are four important things I have learned along the way:
“Together, Apes Strong”- Rise of the Planet of the Apes
aka The Power of the Community
Before joining Microsoft, I was an MVP. I got to see firsthand what happens when people are enthusiastic about a product or service they use every day.
[Left] 2007 | Vista launch event, my first event as an MVP. With MVP Mark Rosenberg, Daniel Egan, and Lynn Langit (Microsoft Evangelist). [Right] 2008 | My first MVP Summit, with Steve Ballmer answering MVP questions.
Communities have the power to bring people together from all corners of the world, creating a sense of belonging and support. These communities allow individuals to share their experiences, knowledge, and passions with others who have similar interests. This exchange of information allows its members to gain new perspectives and insights from diverse viewpoints.
Additionally, online communities provide a platform for individuals to find support and encouragement, whether they are facing personal challenges or pursuing their goals. Finally, when enough people start to discuss a problem about a product or service, together, their voices are heard, and change happens. Be a change agent within the change agency – the best community in tech.
“Danger Will Robinson, Danger!”- Lost in Space
aka Learning from Others’ Mistakes
Since starting my own company, StephenLRose.com, I now work with clients across a variety of verticals, from finance and pharmaceuticals to manufacturing and services. What I have learned: no matter how well versed you are with any product, individuals and organizations will have some “unique” way of using it that no one else has tried before. One instance was when a company told its users to store all their documents in the Windows sub-folder because it was more secure than My Documents (what became OneDrive), since hackers “wouldn’t look there.” Sigh.
[Left] 2011 | Indianapolis, IN – Community exchange sessions from Get On The Bus tour. [Right] 2017 | Interviewing members of the OneDrive engineering team about timelines and top community feature requests.
I shared this story with the community to see if there was any precedent or truth to it, and more so, to understand the dangers of doing this. I got a ton of great answers back, which let me give a more well-balanced answer than I could have by myself. To be honest, I was initially gobsmacked by this “security practice,” but with insights from the community, I was able to give them a coherent, guidance-based response.
I am sure that many of you have stories, logical and illogical, that would help the community avoid the mistakes made by others. I highly encourage sharing your stories. Feel free to at-mention me, or if it’s a juicy, tricky one, “at-mention everyone” (aka, the community) to help you work through it.
“Darmok, whose arms were wide”- Star Trek: The Next Generation (S5 Ep.2)
aka The Value of Sharing Your Knowledge
I am old. I remember using 56k x2 modems, PCMCIA slots, IOMEGA disks, and Cheetah Fastback, and running DCPROMO from the command line to promote a server from BDC (Backup Domain Controller) to PDC (Primary Domain Controller). But with that comes years of experience and understanding. I can comfortably contribute to a chat with everyone from the CEO down to the assistant administrator because I have done all their jobs. Just like in a restaurant, the manager who has worked every job and every station in the restaurant gets respect because they have been there.
[Top left] A group of pros sharing a moment at the Springboard Party in 2014. [Top right] During our Get On The Bus South America Tour in 2012 – got to connect with amazing community members in Brazil, Argentina, Peru and Chile. [Bottom left > right] Well known Microsoft employees and community supporters, Rick Claus (left) and DJ Joey Snow (right) – at the Springboard Party in New Orleans.
Sharing your knowledge is so important to building a strong community. That can be through writing a blog, recording a podcast, or giving a talk at a local user group, a larger M365 or TechCon365 conference, or in the Tech Community support forums. You are the ones whose jobs depend on knowing these software packages inside and out. This is your chance to help others become successful and, in return, have them there for you when you need help with a new product, because, as Ferris Bueller once said, “Life moves pretty fast.”
“By Grabthar’s hammer, by the suns of Worvan, you shall be avenged!”- Galaxy Quest
aka Community has your back
At conferences I like to create and attend after-hours meet-ups with names like Copilot Lessons Learned, Adoption: Share Your Tales or Terror, or my personal favorite, Have A Cigar/Share Your Frustration Meet-Up. Find opportunities to share; the community loves when you share your story.
[Left] 2010 | In Chicago with Michael Bender and Marco Russinovitch. [Right] After a long day at TechEd EMEA in Berlin sharing stories with Steve Campbell, Michael Niehaus, Jeremy Chapman, Melissa Batham, Liberty Munson, and me.
It’s evenings like these that reassure me the community is alive for the right reasons. When one of us shares their secret sauce, it encourages others to do the same. I can’t tell you how many times people have been surprised when they asked me to share something on social for them and I said yes – maybe speaking to their 50-person user group on the East Coast via Teams, or being a guest on their podcast. I love doing this, and doing it in return for others. Why? Because we have each other’s backs. If a troll comes after one of us, we respond in vast numbers and push them back under the bridge from whence they came! If you have a session at a conference, we will help fill those seats and share decks we have done in the past to help accelerate and reaffirm.
“And may the force be with you, always”- Star Wars – A New Hope
The day I left Microsoft after 15 years was a hard one for me. After posting my thoughts and thank-yous on LinkedIn, I was amazed at how many people thanked me for helping them on their journey to success through my talks, one-on-one time, blogs, or webcasts. It really helped during a tough time. Then those same folks reached out to help me connect and even offered me projects. The community had my back after all those years of being there for them, and I am eternally grateful.
Like Luke and Han getting a medal, it was a moment that made me reflect on my journey: the lifelong friendships I have made, the community members we’ve lost whom we think of every day, and the feeling that I’ve got this because so many people have got me.
Thank you, all my friends, and may the force be with you, always.
— Stephen Rose
About Stephen
Stephen has been helping companies all over the world plan, pilot, deploy, manage, secure, and adopt products including Microsoft 365, Teams, and Copilot, as well as a variety of AI tools and third-party products.
Stephen was a business owner for many years and an MCT and MVP before joining Microsoft in 2009. While there, he oversaw IT pro training and content for Windows, OneDrive, Office, Teams, and Copilot until he left in 2023.
Currently, he consults with a variety of customers, helping them manage change and new ways of working by showing companies how to use the tools they have today more effectively and get ready for the AI tools they will need to stay ahead.
Check out all the great videos featuring members of our community on his website at StephenLRose.com/videos. Here’s a sample from Stephen’s show, UnplugIT, “Unlocking AI’s Potential in SharePoint: A Conversation with Richard Harbridge (CTO at 2toLead)”:
Visit StephenLRose.com to learn more:
• Find him on X: @StephenLRose
• LinkedIn: linkedin.com/in/StephenLRose
Building Bronze Layer of Medallion Architecture in Fabric Lakehouse using WAL2JSON
Introduction
If you work in data engineering, you may have encountered the term “Medallion Architecture.” This design pattern organizes data within a Lakehouse into distinct layers to facilitate efficient processing and analysis. Read more about it here. This is also a recommended design approach for Microsoft Fabric. To make a Lakehouse usable, data must pass through several layers: Bronze, Silver, and Gold. Each layer focuses on progressively enhancing data cleanliness and quality. In this article, we will specifically explore how to build the Bronze layer using real-time data streaming from existing PostgreSQL databases. This approach enables real-time analytics and supports AI applications by providing real-time, raw, unprocessed data.
Image source – https://www.databricks.com/glossary/medallion-architecture
What is the Bronze Layer?
This layer is often referred to as the Raw Zone, where data is stored in its original format and structure. According to the common definition, the data in this layer is typically append-only and immutable, but this can be misunderstood. While the intention is to preserve the original data as it was ingested, this does not mean that there will be no deletions or updates. Instead, if deletions or updates occur, the original values are preserved as older versions. This approach ensures that historical data remains accessible and unaltered. Delta Lake is commonly used to manage this data, as it supports versioning and maintains a complete history of changes.
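As a concrete illustration of that versioning, a Delta table’s commit history can be inspected and an earlier version read back with time travel (a minimal PySpark sketch; the path is a placeholder, and spark is the session predefined in Fabric notebooks):
from delta.tables import DeltaTable

# Placeholder path; any Delta table exposes the same history
delta_table_path = "abfss://your_container@your_storage_account.dfs.core.windows.net/delta/employees"

# Every write produces a new version; history() lists them all
DeltaTable.forPath(spark, delta_table_path).history().show()

# Time travel: read the table exactly as it was at version 0
v0_df = spark.read.format("delta").option("versionAsOf", 0).load(delta_table_path)
v0_df.show()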
PostgreSQL as the source for the Bronze Layer
Imagine you have multiple PostgreSQL databases running different applications and you want to integrate their data into a Delta Lake. You have a couple of options to achieve this. The first approach involves creating a Copy activity that extracts data from individual tables and stores it in Delta tables. However, this method requires a watermark column to track changes or necessitates full data reloads each time, which can be inefficient.
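As a sketch of that first approach (the last_modified watermark column is an assumption; many schemas lack one, which is exactly the limitation described):
-- Incremental pull: fetch only rows changed since the previous run,
-- then persist MAX(last_modified) as the watermark for the next run.
SELECT *
FROM employees
WHERE last_modified > '2024-08-01 00:00:00';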
The second approach involves setting up Change Data Capture in PostgreSQL to capture and stream data changes continuously. This method allows for real-time data synchronization and efficient updates to OneLake. In this blog, we will explore a proof of concept for implementing this CDC-based approach.
How to utilize PostgreSQL logical decoding, Wal2json and Fabric Delta Lake to create a continuously replicating bronze layer?
We will be utilizing PostgreSQL logical replication, Wal2Json plugin and PySpark to capture and apply the changes to delta lake. In PostgreSQL, logical replication is a method used to replicate data changes from one PostgreSQL instance to another or to a different system. Wal2json is a PostgreSQL output plugin for logical replication that converts Write-Ahead Log (WAL) changes into JSON format.
Setup on Azure PostgreSQL
Change the following server parameters by logging into the Azure portal and navigating to “Server parameters” for the PostgreSQL service.

Parameter Name           Value
wal_level                logical
max_replication_slots    >0 (e.g., 4 or 8)
max_wal_senders          >0 (e.g., 4 or 8)
Create a publication for all the tables. A publication is a feature of logical replication that lets you define which tables’ changes should be streamed to subscribers.

CREATE PUBLICATION cdc_publication FOR ALL TABLES;
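You can confirm the publication and the tables it covers by querying the standard catalogs:
SELECT * FROM pg_publication;
SELECT * FROM pg_publication_tables WHERE pubname = 'cdc_publication';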
Create a replication slot with wal2json as the plugin name. A slot represents a stream of changes that can be replayed to a client in the order they were made on the origin server. Each slot streams a sequence of changes from a single database. Note: the wal2json plugin is pre-installed in Azure PostgreSQL.

SELECT * FROM pg_create_logical_replication_slot('cdc_slot', 'wal2json');
You can test whether the replication is working by updating some test data and running the following command.

SELECT * FROM pg_logical_slot_get_changes('cdc_slot', NULL, NULL, 'include-xids', 'true', 'include-timestamp', 'true');
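One caveat: pg_logical_slot_get_changes consumes the changes it returns and advances the slot, so a row you read here will not come back on the next call. To inspect pending changes without consuming them, use the peek variant:
SELECT * FROM pg_logical_slot_peek_changes('cdc_slot', NULL, NULL, 'include-xids', 'true', 'include-timestamp', 'true');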
Now that you have tested the replication, let’s look at the output format. The key components of the wal2json output are listed below, followed by examples.
Attribute       Description
xid             The transaction ID.
timestamp       The timestamp when the transaction was committed.
kind            Type of operation (insert, update, delete).
schema          The schema of the table.
table           The name of the table where the change occurred.
columnnames     An array of column names affected by the change.
columntypes     An array of column data types corresponding to columnnames.
columnvalues    An array of new values for the columns (present for insert and update operations).
oldkeys         An object containing the primary or unique key values before the change (present for update and delete operations).
For INSERT statement
{
  "xid": 8362757,
  "timestamp": "2024-08-01 15:09:34.086064+05:30",
  "change": [
    {
      "kind": "insert",
      "schema": "public",
      "table": "employees_synapse_test",
      "columnnames": ["EMPLOYEE_ID", "FIRST_NAME", "LAST_NAME", "EMAIL", "PHONE_NUMBER", "HIRE_DATE", "JOB_ID", "SALARY", "COMMISSION_PCT", "MANAGER_ID", "DEPARTMENT_ID"],
      "columntypes": ["numeric(10,0)", "text", "text", "text", "text", "timestamp without time zone", "text", "numeric(8,2)", "numeric(2,2)", "numeric(6,0)", "numeric(4,0)"],
      "columnvalues": [327, "3275FIRST NAME111", "3275LAST NAME", "3275EMAIL3275EMAIL", "3275", "2024-07-31 00:00:00", "IT_PROG", 32750, 0, 100, 60]
    }
  ]
}
For UPDATE statement
{
  "xid": 8362759,
  "timestamp": "2024-08-01 15:09:37.228446+05:30",
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "employees_synapse_test",
      "columnnames": ["EMPLOYEE_ID", "FIRST_NAME", "LAST_NAME", "EMAIL", "PHONE_NUMBER", "HIRE_DATE", "JOB_ID", "SALARY", "COMMISSION_PCT", "MANAGER_ID", "DEPARTMENT_ID"],
      "columntypes": ["numeric(10,0)", "text", "text", "text", "text", "timestamp without time zone", "text", "numeric(8,2)", "numeric(2,2)", "numeric(6,0)", "numeric(4,0)"],
      "columnvalues": [100, "Third1111", "BLOB", "SKING", "515.123.4567", "2024-08-01 00:00:00", "AD_PRES", 24000, null, null, 90],
      "oldkeys": {
        "keynames": ["EMPLOYEE_ID"],
        "keytypes": ["numeric(10,0)"],
        "keyvalues": [100]
      }
    }
  ]
}
For DELETE statement
{
  "xid": 8362756,
  "timestamp": "2024-08-01 15:09:29.552539+05:30",
  "change": [
    {
      "kind": "delete",
      "schema": "public",
      "table": "employees_synapse_test",
      "oldkeys": {
        "keynames": ["EMPLOYEE_ID"],
        "keytypes": ["numeric(10,0)"],
        "keyvalues": [327]
      }
    }
  ]
}
Create a lakehouse in Fabric (OneLake). For detailed instructions, see this.
Create a delta table with initial load of the data using Spark.
# PostgreSQL connection details
jdbc_url = "jdbc:postgresql://your_postgres_db.postgres.database.azure.com:5432/postgres"
jdbc_properties = {
    "user": "postgres",
    "password": "your_password",  # placeholder; supply your own credentials
    "driver": "org.postgresql.Driver"
}

# Read data from the PostgreSQL employees table
employee_df = spark.read.jdbc(url=jdbc_url, table="employees", properties=jdbc_properties)

# Define the path for the Delta table in ADLS
delta_table_path = "abfss://your_container@your_storage_account.dfs.core.windows.net/delta/employees"

# Write the DataFrame to the Delta table
employee_df.write.format("delta").mode("overwrite").save(delta_table_path)

# Read it back to verify the initial load
delta_df = spark.read.format("delta").load(delta_table_path)
delta_df.show()
Now, running the following code continuously will keep the data in the Delta lake in sync with the primary PostgreSQL database.

import json
import time

import pandas as pd
from delta.tables import DeltaTable

# PostgreSQL connection details
jdbc_url = "jdbc:postgresql://your_postgres_db.postgres.database.azure.com:5432/postgres"
jdbc_properties = {
    "user": "postgres",
    "password": "your_password",  # placeholder; supply your own credentials
    "driver": "org.postgresql.Driver"
}

# Delta table details
delta_table_path = "abfss://your_container@your_storage_account.dfs.core.windows.net/delta/employees"
delta_table = DeltaTable.forPath(spark, delta_table_path)

while True:
    # Drain the replication slot; each returned row is one transaction as JSON
    cdc_df = spark.read.jdbc(
        url=jdbc_url,
        table="(SELECT data FROM pg_logical_slot_get_changes('cdc_slot', NULL, NULL, "
              "'include-xids', 'true', 'include-timestamp', 'true')) as cdc",
        properties=jdbc_properties,
    )

    for row in cdc_df.collect():
        # A transaction may contain zero or more individual changes
        for changedData in json.loads(row["data"]).get("change", []):
            DMLtype = changedData["kind"]

            if DMLtype in ("insert", "update"):
                column_names = changedData["columnnames"]
                column_values = changedData["columnvalues"]
                source_data = {col: [val] for col, val in zip(column_names, column_values)}
                change_df = spark.createDataFrame(pd.DataFrame(source_data))

            if DMLtype == "insert":
                change_df.write.format("delta").mode("append").save(delta_table_path)

            if DMLtype == "update":
                old_keys = changedData["oldkeys"]
                condition = " AND ".join(
                    f"target.{key} = source.{key}" for key in old_keys["keynames"]
                )
                delta_table.alias("target").merge(
                    change_df.alias("source"),
                    condition
                ).whenMatchedUpdateAll().whenNotMatchedInsertAll().execute()

            if DMLtype == "delete":
                condition = " AND ".join(
                    f"{key} = '{value}'"
                    for key, value in zip(changedData["oldkeys"]["keynames"],
                                          changedData["oldkeys"]["keyvalues"])
                )
                delta_table.delete(condition)

    time.sleep(10)  # poll interval; tune to your latency requirements
Conclusion
In conclusion, building the Bronze layer of the Medallion Architecture using wal2json from PostgreSQL as the source to Fabric OneLake provides a robust and scalable approach for handling raw data ingestion. This setup leverages PostgreSQL’s logical replication capabilities to capture and stream changes in real-time, ensuring that the data lake remains up-to-date with the latest transactional data.
Implementing this architecture ensures that the foundational layer is well structured and becomes a solid base for the subsequent layers, while also supporting real-time analytics, advanced data processing, and AI applications.
By adopting this strategy, organizations can achieve greater data consistency, reduce latency in data processing, and enhance the overall efficiency of their data pipelines.
References
https://learn.microsoft.com/en-us/fabric/onelake/onelake-medallion-lakehouse-architecture
https://learn.microsoft.com/en-us/azure/databricks/lakehouse/medallion
https://blog.fabric.microsoft.com/en-us/blog/eventhouse-onelake-availability-is-now-generally-available?ft=All
https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-and-delta-tables
Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please send an email to Database Platform Engineering Team.
Partner Case Study Series | Siemens
Combining the real and the digital world
To understand the physical world, it can help to abstract it and view it through a digital lens. Siemens AG focuses on technological solutions that help its customers identify and solve the big challenges across multiple industries. From infrastructure, to transportation, to healthcare, Siemens empowers its customers to transform their markets, as well as the everyday lives of billions of people. In fact, Siemens’ Head of Product Management for Product Lifecycle Management software, Ales Alajbegovic, says, “We are providing industrial software for design and manufacturing. There is pretty much no company in the world that doesn’t use our software when it comes to these areas.”
And according to Siemens’ Global Alliance Leader for Microsoft, John Butler, those software use cases are ever expanding. “That’s everything from working with our customers to reduce drag on an automobile or an airplane to improving manufacturing efficiency or helping design the newest product. At the end of the day, what we’re trying to do is figure out how to expedite that manufacturing process and that development process to get products to market faster for our customers.”
Full visibility, from start to finish
There’s increasing pressure on businesses to review every phase of the product lifecycle for cost savings, schedule reductions, and other risk factors. Too often, problems come up on the manufacturing floor that are never addressed, causing a cascade effect on productivity across the line. To address these industry issues, you need a remarkable solution from an organization with a tenure to match.
Continue reading here
Generating codim-1 bifurcation diagram of sliding friction oscillator
I am performing continuation analysis in MATCONT and am finding it difficult to generate a codim-1 bifurcation diagram.
Here is my linearized system of equations, which represents 3-DOF clutch dynamics with sliding-type friction:
x1' = x2
x2' = -(de + Fn*mukin)*x2/je + (Fn*mukin*x4)/je - ((x2 - x4)*mukin + mus)/je
x3' = x4
x4' = (Fn*mukin*x2)/jd - (ds + Fn*mukin)*x4/jd + ds*x6/jd + ks*x7/jd
x5' = x6
x6' = ds*x4/jv - ds*x6/jv - ks*x7/jv
x7' = x6 - x4
This system represents clutch dynamics with sliding friction between the inertias je and jd. jv is the load inertia, ks is the stiffness between jd and jv, and de and ds are damping coefficients. Fn is the normal force acting on jd and je during clutch engagement, mukin is the slope of the friction curve (usually between -0.01 and 0.01), and mus is the static coefficient of friction (0.136 for this analysis).
je = 1.57
jd = 0.0066
jv = 4
ks = 56665.5
ds = de = 0.1
Fn = 8000
Please guide me on how to perform the continuation of this system in MATCONT.
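For reference, here is the right-hand side of the system above written as a plain MATLAB function (a sketch assembled from the stated equations and parameter values; the function name is illustrative, and MATCONT ultimately expects the system in its own system-file format, with mukin exposed as the free parameter):
function dxdt = clutch_rhs(t, x, mukin)
% 3-DOF clutch model from the question; mukin is the continuation parameter.
je = 1.57;  jd = 0.0066;  jv = 4;
ks = 56665.5;  ds = 0.1;  de = 0.1;
Fn = 8000;  mus = 0.136;

dxdt = zeros(7, 1);
dxdt(1) = x(2);
dxdt(2) = -(de + Fn*mukin)*x(2)/je + (Fn*mukin*x(4))/je - ((x(2) - x(4))*mukin + mus)/je;
dxdt(3) = x(4);
dxdt(4) = (Fn*mukin*x(2))/jd - (ds + Fn*mukin)*x(4)/jd + ds*x(6)/jd + ks*x(7)/jd;
dxdt(5) = x(6);
dxdt(6) = ds*x(4)/jv - ds*x(6)/jv - ks*x(7)/jv;
dxdt(7) = x(6) - x(4);
end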
How to launch scripts in Simulink using keyboard shortcuts
I have some scripts to automate some design processes in Simulink, but I would like to run/launch them with keyboard shortcuts.
I tried to use ‘accelerator’ keys; they show up in the menu as accelerator keys, but they don’t work.
How do I activate accelerators, or run/launch custom scripts with keyboard shortcuts?
Using array input and output for simulink S-function C code block
I’m having a tough time attempting to use a C code block that needs uint8 array input and output. I’ve tried starting from a number of legacy code examples but encounter crashes every time my C code actually tries to access the incoming memory. If someone can point me to a working example (I’ve already tried everything that looked relevant here), I’d appreciate it. In my latest attempt I gave up on using the legacy code functions and took code from the example Create a Basic C MEX S-Function – MATLAB & Simulink (mathworks.com), which at least runs without crashing, and started to change it to take uint8 in/out as below. However, I’ve no idea where to find the equivalent of InputRealPtrsType for uint8_T. If I use InputPtrsType instead of InputRealPtrsType, I get the error: error C2100: you cannot dereference an operand of type const void
static void mdlOutputs(SimStruct *S, int_T tid)
{
int_T i;
// replaced ssGetInputPortRealSignalPtrs(S,0) with ssGetInputPortSignalPtrs
InputRealPtrsType uPtrs = ssGetInputPortSignalPtrs(S,0);
// replaced ssGetOutputPortRealSignal(S,0) with ssGetOutputPortSignal
real_T *y = ssGetOutputPortSignal(S,0);
int_T width = ssGetOutputPortWidth(S,0);
for (i=0; i<width; i++) {
*y++ = 2.0 *(*uPtrs[i]);
}
}I’m having a tough time attempting to use a C code block that needs uint8 array input and output . I’ve tried starting from a number of legacy code examples but encounter crashes every time my C code actually tries to access the incoming memory. If someone can point me to a working example (I’ve already tried everything that looked relevant here) I’d appreciate it. In my latest attempt I gave up on using the legacy code functions and took code from this example Create a Basic C MEX S-Function – MATLAB & Simulink (mathworks.com) which at least runs without crashing, and started to change it to take uint8 in/out as below. However I’ve no idea where to find the equivalent of InputRealPtrsType for unit8_T . If I use InputPtrsType instead of InputRealPtrsType I get the error error C2100: you cannot dereference an operand of type const void`
static void mdlOutputs(SimStruct *S, int_T tid)
{
int_T i;
// replced ssGetInputPortRealSignalPtrs(S,0) with ssGetInputPortSignalPtrs
InputRealPtrsType uPtrs = ssGetInputPortSignalPtrs(S,0);
// replaced ssGetOutputPortRealSignal(S,0) with ssGetOutputPortSignal
real_T *y = ssGetOutputPortSignal(S,0);
int_T width = ssGetOutputPortWidth(S,0);
for (i=0; i<width; i++) {
*y++ = 2.0 *(*uPtrs[i]);
}
} I’m having a tough time attempting to use a C code block that needs uint8 array input and output . I’ve tried starting from a number of legacy code examples but encounter crashes every time my C code actually tries to access the incoming memory. If someone can point me to a working example (I’ve already tried everything that looked relevant here) I’d appreciate it. In my latest attempt I gave up on using the legacy code functions and took code from this example Create a Basic C MEX S-Function – MATLAB & Simulink (mathworks.com) which at least runs without crashing, and started to change it to take uint8 in/out as below. However I’ve no idea where to find the equivalent of InputRealPtrsType for unit8_T . If I use InputPtrsType instead of InputRealPtrsType I get the error error C2100: you cannot dereference an operand of type const void`
static void mdlOutputs(SimStruct *S, int_T tid)
{
int_T i;
// replced ssGetInputPortRealSignalPtrs(S,0) with ssGetInputPortSignalPtrs
InputRealPtrsType uPtrs = ssGetInputPortSignalPtrs(S,0);
// replaced ssGetOutputPortRealSignal(S,0) with ssGetOutputPortSignal
real_T *y = ssGetOutputPortSignal(S,0);
int_T width = ssGetOutputPortWidth(S,0);
for (i=0; i<width; i++) {
*y++ = 2.0 *(*uPtrs[i]);
}
} sfunction, c-code, legacy code MATLAB Answers — New Questions
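One possible direction, sketched under the assumption that both ports are configured as uint8 (SS_UINT8) in mdlInitializeSizes: there is no typed alias like InputRealPtrsType for integer ports, so you take the generic InputPtrsType (an array of const void* element pointers) and cast each element yourself.
static void mdlOutputs(SimStruct *S, int_T tid)
{
    int_T i;
    /* Generic element pointers; each uPtrs[i] is a const void* */
    InputPtrsType uPtrs = ssGetInputPortSignalPtrs(S, 0);
    /* The output signal is a contiguous block; cast it once */
    uint8_T *y = (uint8_T *) ssGetOutputPortSignal(S, 0);
    int_T width = ssGetOutputPortWidth(S, 0);

    for (i = 0; i < width; i++) {
        /* Cast each element pointer to the port's configured type */
        const uint8_T *u = (const uint8_T *) uPtrs[i];
        y[i] = (uint8_T)(2 * (*u));
    }
}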
Initializing data store memory for Simulink global variables
For all functions called by a Simulink function block, Simulink requires the declaration of all global variables, and these variables need to be initialized in Data Store Memory. If any such global variable is an array whose initial value is an empty matrix [] (because its size changes during program execution), Simulink does not accept the bare initial value of [] for that variable’s Data Store Memory. How can this situation be handled, given that there are many such variables?
Auto-complete customer data
Hello, good day. In my business I have several customers and several suppliers. I keep a general record of each customer’s supplies, with the date, volume, product supplied, which supplier provided the service, and other data.
I would like to have a separate file per customer, so that whenever I fill in the general sheet, certain data is automatically copied to that customer’s tab. Is there a way to achieve this?
Thanks, and I look forward to your comments.
Line style and color for pivot chart with multiple legend items.
I have a pivot chart with 2 entries in Legend (Series). I would like the first entry to determine the line style (solid, dashed, dotted, etc.) and the second to determine the line color.
So if, for instance, the first entry has 4 values and the second has 5, I would expect 4 different styles and 5 different colors for the 20 lines. Instead I get 20 different colors, which is not a very useful way of representing multi-dimensional data. Is there a way to change this?
Cash in on Microsoft Incentives!
Hey, amazing ISV community!
We’ve got something awesome coming up, and I wanted to get you in the loop! On September 30th, we’re hosting a webinar all about Microsoft incentives for FY 2024-2025, and trust me, this is a must-attend for any ISV out there.
Why this is going to be 🔥:
For ISVs who aren’t transactable yet: This webinar is exactly what you need. We’re talking actionable steps to unlock revenue by getting onto the Marketplace. The final push you need to go from “thinking about it” to seeing real $$.
For ISVs who are already transactable: It’s all about doubling down. We’ll show you how to take full advantage of Microsoft’s incentives and commit more to the Marketplace for even bigger returns.
Webinar Details:
Date: September 30th
Time:
Morning Session: 10:00 AM – 10:30 AM (CEST) [Link to register]
Afternoon Session: 6:00 PM – 6:30 PM (CEST) [Link to register]
Topic: Strategies for Leveraging Microsoft Incentives in FY 2024-2025
This webinar is tailored to help you understand the various Microsoft Marketplace incentive programs available and how to strategically apply them to drive growth.
Register Here:
[Morning Session] – EMEA timezone
[Afternoon Session] – AMERICAS timezone
Looking forward to seeing you there!
Excel not password protecting VBA page properly…
Whenever I try to hide sheets and then password protect them in the VBA editor, it never works. Whenever I go back into the VBA editor, it just gives me immediate and full access without prompting me for a password. Any ideas? I am using Excel 365.
Thanks
Favicon not updating on Bing – thingstoconsidertoday.com
Hello,
I manage a website thingstoconsidertoday.com. I am trying to update the favicon to our logo, but it does not display on Bing. It displays on Google just fine.
Any thoughts on how to push this favicon so that Bing updates?
Thanks,
Ryan
Need Help with Windows Server 2022 License Activation After VM Crash
Hello Community,
I’m facing an issue with my Windows Server 2022 license and would appreciate some guidance.
Details:
I purchased a Windows Server 2022 license and activated it on two VMs.
Recently, both VMs crashed, and I’m now trying to activate the license on two new VMs.
However, I am encountering an error stating that the activation limit has been exceeded.
I understand that each license has an activation limit, but in cases where servers or VMs crash, what are my options for reusing the license on new VMs? How can I resolve the activation error and ensure my license is properly applied to the new servers?
Any advice on the correct steps to take, or if there’s a way to reset the activation count or transfer the license, would be greatly appreciated.
Thank you in advance for your help!
Help with migration concepts
Good morning to everyone.
I have a couple of questions that I hope you can help me resolve. These questions are related to Exchange Server 2013 (I know that this product is out of support and that is why we are migrating it to Exchange Server 2019).
This is my scenario:
I have a main site with 2 Exchange Server 2013 CU10 servers, a second site with 1 Exchange Server 2013 CU23 server, and a third site with 1 Exchange Server 2013 CU23 server. The main idea is to migrate all mail servers to Exchange Server 2019.
These are the questions:
1. Is it possible to install a new server with Exchange Server 2019 on the main site without upgrading the remaining servers with Exchange Server 2013 to CU23?
2. What will happen to the mail flow after installing the new server with Exchange Server 2019? Will the main server with Exchange Server 2013 continue to manage the internal and external mail flow? Or will the new server with Exchange Server 2019 manage the mail flow?
3. In case the new server with Exchange Server 2019 is the one that manages the mail flow, is there a possibility that the server with Exchange Server 2013 will manage the mail flow until the migration is finished?
Thank you for your time and collaboration
IIS Logs have Incorrect Date Modified
I have a server that is creating daily IIS logs (stored locally) whose Date Modified timestamp shows the incorrect date for the “current” log.
Example: Today is 9-10-2024. The current log is named correctly (u_ex240910.log) and has correct information inside, but its Date Modified timestamp is 9-9-2024 7:00 PM. There is also a log file, u_ex240909.log, which has correct information in it as well. I have dozens of IIS servers, and this is not an issue on the rest of them. The Logging feature in IIS Manager is set up identically on this problem server and the working servers, so I am stumped.
Screenshot of “problem” server.
Screenshot of “working” server:
Screenshot of Logging setup in IIS Manager (which is identical on both the problem and working servers):
Using a Calculated End Date in the Modern SharePoint Calendar View Drop-Down
Hello All: I recently created an end date as a calculated field in a Microsoft SharePoint list. The calculation’s returned data type is the date and time format. I want to use this new end date as the actual end date in my SharePoint calendar view. Unfortunately, the end date does not appear in the calendar view drop-down because it is not considered a “Date and Time” type. How do I convert my newly calculated end date to the “Date and Time” type so that it will appear in the calendar view end date drop-down menu? This calculation works in classic SharePoint but renders no value in modern SharePoint.

I did see a similar request from Waqas on February 6, 2024. The response from Sophia Papadopoulos does not work. Also, the Microsoft moderator offered a response that was not helpful. See below:

“We went through your post carefully and do understand your great idea of importing the calculated end date as an actual date into a calendar to arrange tasks efficiently. But we are really sorry to convey that it seems like we also failed to achieve it from our tests. Given this situation, I sincerely recommend you use Feedback Community to suggest this feature limitation and add your valuable idea in the SharePoint Feedback Community (microsoft.com) which is the best place to share ideas directly with the product building team and improve the Microsoft Products.”

Does anyone have a workaround for this issue? Please let me know. It is amazing how the end date calculation is picked up by the calendar view in the classic version but not the modern version. I look forward to your help. Thank you. BW
Task Start and Finish dates not in sync with Assignments view
When I allocate hours per task and resource in the Assignments view, the Start and Finish date of the task are set according to hour allocation – but only in the Assignments view!
When I go back to the Grid view and look at the start and finish dates for the tasks, they do not correspond to the start and finish dates which I see in the Assignments view.
Is there a way to fix this? I think the Assignments view is a great feature and I would love to use it for my project planning, but if the start and finish dates are not synchronized to the actual work allocation, this is a major drawback.