Month: September 2024
Copilot’s Automatic Summary for Word Documents
Automatic Document Summary in a Bulleted List
Last week, I referenced the update for Word where Copilot for Microsoft 365 generates an automatic summary for documents. This is covered in message center notification MC871010 (Microsoft 365 roadmap item 399921). Automatic summaries are included in Copilot for Microsoft 365 and Microsoft Copilot Pro (the version that doesn’t ground prompts using Graph data).
As soon as I published the article where I referred to the feature, it turned up in Word. Figure 1 shows the automatic summary generated for a document (in this case, the source of an article).
The summary is the same output as the bulleted list Copilot generates if you open the Copilot pane and ask it to summarize the document. Clicking the Ask a question button opens the Copilot pane with the summary prepopulated, ready for the user to delve deeper.
The summary is only available after a document is saved and closed. The next time someone opens the document, the summary pane appears at the top of the document and Copilot generates the summary. The pane remains at the top of the document and doesn’t appear on every page. If Copilot thinks it necessary (for instance, if more text is added to a document), it displays a Check for new summary button to prompt the user to ask Copilot to regenerate the summary.
Apart from removing the Copilot license from an account (in which case the summaries don’t appear), there doesn’t seem to be a way to disable the feature. You can collapse the summary, but it’s still there and can be expanded at any time.
Summarizing Large Word Documents
When Microsoft launched Copilot support for Word, several restrictions existed. For instance, Word couldn’t ground user prompts against internet content. More importantly, summarization could only handle relatively small documents. The guidance was that Word could handle documents with up to 15,000 words but would struggle thereafter.
This sounds like a lot, and it's probably enough to handle a large percentage of the documents generated in office environments. However, summaries really come into their own when they extract information from large documents such as contracts and plans. The restriction, a consequence of the size of the prompt that could be sent to the LLM, proved to be a big issue.
Microsoft responded in August 2024 with an announcement that Word can now summarize documents of up to 80,000 words. In the announcement, Microsoft says that the new limit is four times greater than the previous one. The new limit is rolling out for the desktop, mobile, and browser versions of Word. For Windows, the increased limit is available in Version 2310 (Build 16919.20000) or later.
Processing Even Larger Word Documents
Eighty thousand words sounds like a lot. At an average of 650 words per page, that's roughly 123 pages filled with text. I wanted to see how Copilot summaries coped with larger documents.
According to this source, the maximum size of a text-only Word document is 32 MB. With other elements included, the theoretical size extends to 512 MB. I don’t have documents quite that big, but I do have the source document for the Office 365 for IT Pros eBook. At 1,242 pages and 679,800 characters, including many figures, tables, cross-references, and so on, the file size is 29.4 MB.
Copilot attempted to generate a summary for Office 365 for IT Pros but failed. This wasn’t surprising because the file is so much larger than the maximum supported.
The current size of the Automating Microsoft 365 with PowerShell eBook file is 1.72 MB and spans 113,600 words in 255 pages. That’s much closer to the documented limit, and Copilot was able to generate a summary (Figure 2).
Although the bulleted list contains information extracted from the file, it doesn’t reflect the true content of the document because Copilot was unable to send the entire file to the LLM for processing. The bulleted list comes from the first two of four chapters and completely ignores the chapters dealing with the Graph API and Microsoft Graph PowerShell SDK.
Summaries For Standard Documents
Microsoft hasn’t published any documentation that I can find for Copilot’s automatic document summary feature. When it appears, perhaps the documentation will describe how to disable the feature for those who don’t want it. If not, we’ll just have to cope with automatic summaries. At least they will work for regular Word documents of less than 80,000 words.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive monthly insights into what happens, why it happens, and what new features and capabilities mean for your tenant.
Teams Improves Text Pasting and Mic Pending
Who Thought that Including Metadata in Teams Pasted Text Was a Good Idea?
In an example of finally listening to user feedback, Microsoft announced in MC878422 (30 August 2024) that Teams no longer includes metadata in messages copied from chats or channel conversations. The change is effective now and means that instead of having Teams insert a timestamp and the name of the person who created the text, only the text is pasted. This is exactly the way the feature should have worked since day zero. Quite why anyone thought it was a good idea to insert additional information into copied text is one of the great mysteries of Teams development.
MC878422 notes: “Many users have voiced frustrations over copying messages in Teams, particularly the inclusion of metadata like names and timestamps. Customer feedback has been clear, signaling that this feature was adding more noise than value to user workflow.”
Copying Metadata Is an Old Lync Feature
It seems likely that inserting the timestamp and author name is an idea that came to Teams from Lync Server 2013 and Skype for Business. A support article from the time describes how to change the default setting from copying the message, name, and time to copying just the message. Nearly eight years after Teams entered preview in November 2016, an equivalent setting never appeared. The net result is that Teams users had to manually remove the unwanted metadata from copied text after pasting it into another app. Thankfully, the change “helps maintain focus and reduces unnecessary noise.”
I’ve no idea about how many of the 320 million monthly active Teams users found this aspect of the product annoying, but it’s been high up on my list along with in-product advertising and a constant stream of irritating pop-up messages.
Mic Pending is a Feature You Probably Never Knew Existed
On a more positive note, Juan Rivera, Corporate Vice President for Teams Calling, Meetings & Devices Engineering at Microsoft, posted on LinkedIn about a feature called the Mic Pending state, which has apparently now rolled out to all tenants.
I had never thought much about the process required to implement the mute/unmute button in a call, but apparently Microsoft has done the work to make sure that when users hit the mic button (Figure 1), the action occurs immediately. If something prevents the mute or unmute from happening within 100 milliseconds, Teams displays a “pending” icon.
Figure 1: The Teams mute mic button now works with 99.99+% reliability
The issue being addressed is confidence: people need to trust that Teams will mute their microphone as soon as they press the button and unmute it just as promptly. It seems that some folks have been caught by a delay in muting: the button displayed in a Teams meeting showed that the microphone was off when it was still live. You can see how something could end up being heard or captured on a Teams recording that people would have preferred to keep private. Calling your boss a flaming idiot over an open microphone that you thought was muted is possibly not a good thing to do.
According to the post, Microsoft believes that Teams delivers 99.99+% reliability for the mute/unmute toggle, which should mean that the microphone status shown on screen can be trusted. Of course, the paranoid amongst us will always give a microphone two or three seconds before we consider it truly off.
Two Good Changes
The one thing about Teams is that it’s always changing. People like the Office 365 for IT Pros writing team have no shortage of topics to cover when it comes to Teams. Thankfully, the two topics covered here are both positive, even if mic pending hasn’t come to our attention before.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
10 more AI terms you need to know
Read the English version here
Jakarta, 4 September 2024 – Since generative artificial intelligence (AI) surged in popularity in late 2022, most of us have gained a basic understanding of the technology and of how it uses everyday language to make interacting with computers easier. Some of us have even dropped jargon like “prompt” and “machine learning” into casual coffee chats with friends. In late 2023, Microsoft rounded up 10 AI terms you should know. But as AI evolves, its vocabulary evolves too. Do you know the difference between large and small language models? Or what the “GPT” in ChatGPT stands for? Here are ten more advanced AI terms you should know.
Reasoning/planning
Computers using AI can now solve problems and complete tasks by applying patterns learned from historical data to make sense of information, a process similar to reasoning, or thinking logically. The most advanced AI systems can go a step further, tackling increasingly complex problems through planning: devising a sequence of actions to carry out in order to reach a particular goal.
For example, imagine asking an AI program to help plan a trip to a theme park. You write, “I want to visit six different rides at theme park X, including the water ride during the hottest part of the day on Saturday, October 5.” Based on that goal, the AI system can break it down into smaller steps to build a schedule, using reasoning to make sure you don’t visit the same ride twice and that you get to the water ride between 12 p.m. and 3 p.m.
Training/inference
Creating and using an AI system involves two steps: training and inference. Training is the process of educating the AI system: it is given a dataset and learns to perform tasks or make predictions based on that data. For example, the system might be given a list of house prices recently sold in a neighborhood, along with the number of bedrooms and bathrooms in each house and many other variables. During training, the AI system adjusts its internal parameters: values that determine how much weight to give each variable and how each one influences a house’s sale price. Inference is when the AI system uses those learned patterns and parameters to predict the price of a house that will come on the market in the future.
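To make the two steps concrete, here is a toy sketch in Python. Everything in it is an illustrative assumption (the numbers, the single “bedrooms” variable, and the least-squares fit are invented, not a description of any real model): “training” fits two parameters to past sales, and “inference” applies them to a new listing.

```python
# Toy illustration of training vs. inference (made-up data).
past_bedrooms = [2, 3, 4]          # feature for three houses already sold
past_prices = [200, 260, 320]      # sale prices in thousands

# Training: adjust the internal parameters (weight w and bias b) so that
# price ~= w * bedrooms + b fits the historical data (least squares).
n = len(past_bedrooms)
mean_x = sum(past_bedrooms) / n
mean_y = sum(past_prices) / n
w = sum((x - mean_x) * (y - mean_y)
        for x, y in zip(past_bedrooms, past_prices)) \
    / sum((x - mean_x) ** 2 for x in past_bedrooms)
b = mean_y - w * mean_x

# Inference: use the learned parameters to predict the price of a house
# that has not been sold yet.
predicted = w * 5 + b              # a five-bedroom listing
print(predicted)                   # -> 380.0
```

The same split applies at any scale: training is the expensive fitting step done once, inference is the cheap application of the learned parameters to new inputs.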
Small language model (SLM)
Small language models, or SLMs, are pocket-sized versions of large language models (LLMs). Both use machine learning techniques that help them recognize patterns and relationships so they can generate realistic responses in everyday language. While LLMs are enormous and demand substantial computing power and memory, SLMs such as Phi-3 are trained on smaller, curated datasets and have fewer parameters, making them compact enough to run offline, without an internet connection. That makes them a good fit for devices such as laptops or phones, where you might want to ask a simple question about pet care but don’t need detailed guidance on how to train a guide dog.
Grounding
Generative AI systems can compose stories, poems, and jokes, and answer research questions. But they sometimes struggle to separate fact from fiction, or their training data may be out of date, so they can give inaccurate responses, an occurrence known as a hallucination. Developers work to help AI interact with the real world accurately through grounding: connecting and anchoring a model to real-world data and examples to improve accuracy and produce output that is more contextually relevant and personalized.
Retrieval Augmented Generation (RAG)
When developers give an AI system access to a grounding source to help it be more accurate and current, they use a method called Retrieval Augmented Generation, or RAG. The RAG pattern saves time and resources by supplying additional knowledge without retraining the AI program.
It’s as if you were Sherlock Holmes: you have read every book in the library but still can’t solve the case, so you climb to the attic, unroll a few ancient scrolls, and voilà, you find the missing piece of the puzzle. As another example, if you run a clothing company and want a chatbot that can answer questions specific to your products, you can apply the RAG pattern over your product catalog to help customers find the perfect green sweater from your store.
Orchestration
An AI program has many things to do while processing a user’s request. To make sure it performs all of those tasks in the right order to produce the best response, the work is coordinated by an orchestration layer.
For example, if you ask Microsoft Copilot “Who was Ada Lovelace?” and then ask “When was she born?” in your next prompt, the AI orchestrator stores your chat history to see that “she” in the second prompt refers to Ada Lovelace.
The orchestration layer can also follow the RAG pattern by searching the internet for fresh information to add to the context, helping the model produce a better answer. It’s like a maestro cueing the violins, then the flutes and oboes, while following the sheet music to produce the sound the composer had in mind.
Memory
Today’s AI models technically have no memory. But AI programs can follow instructions that help them “remember” information by taking specific steps with every interaction: temporarily storing previous questions and answers in a chat and including that context in the current request to the model, or using grounding data from the RAG pattern to make sure a response draws on the latest information. Developers are experimenting with orchestration layers to help AI systems work out whether details need to be remembered only briefly (short-term memory, like jotting something on a sticky note) or whether it would be more useful to remember them for longer by storing them in a more permanent location.
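The short-term memory described above can be sketched in a few lines of Python. This is a hypothetical orchestration helper, not any real Copilot API: previous turns are stored and prepended to the next request so the model can resolve references like “she”.

```python
# Toy sketch of short-term chat "memory": keep recent question/answer
# turns and include them as context in the next request to the model.
history = []

def record_turn(question, answer):
    # A real orchestrator would call a model here; we just store the turn.
    history.append((question, answer))

def build_request(new_question, max_turns=2):
    # Prepend the most recent turns so the model sees the chat context.
    recent = history[-max_turns:]
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in recent)
    return f"{context}\nQ: {new_question}"

record_turn("Who was Ada Lovelace?", "A 19th-century mathematician.")
request = build_request("When was she born?")
# The request now contains "Ada Lovelace", so "she" can be resolved.
```

A long-term memory would replace the in-process list with a more permanent store, which is exactly the short-term/long-term trade-off developers are experimenting with.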
Transformer models and diffusion models
People have been teaching AI systems to understand and generate language for decades, but one of the breakthroughs that accelerated recent progress is the transformer model. Among generative AI models, transformers are the ones that understand context and nuance best and fastest. Fluent storytellers, they attend to patterns in data and weigh the importance of different inputs, which helps them quickly predict what comes next and thus generate text. The transformer is even the T in ChatGPT: Generative Pre-trained Transformer. Diffusion models, commonly used for image generation, add a different twist by working more gradually and methodically, diffusing the pixels of an image from random positions until they are distributed in a way that forms the image requested in the prompt. Diffusion models keep making small changes until they produce an output that matches the user’s needs.
Frontier models
Frontier models are large-scale systems that push the boundaries of AI and can perform a wide variety of tasks with broad new capabilities. They can be so advanced that we are sometimes surprised by what they manage to accomplish. Technology companies including Microsoft formed the Frontier Model Forum to share knowledge, set safety standards, and help everyone understand these powerful AI programs so they are developed safely and responsibly.
GPU
A GPU, which stands for Graphics Processing Unit, is essentially a turbocharged calculator. GPUs were originally designed to smooth out spectacular graphics in video games and have since become the muscle of computing. The chips contain many small cores, networks of circuits and transistors, that tackle math problems together, an approach known as parallel processing. That is essentially what AI does: solve enormous numbers of calculations at scale so it can communicate in human language and recognize images or sounds. That’s why AI platforms depend so heavily on GPUs, for both training and inference. In fact, today’s most advanced AI models are trained on vast arrays of interconnected GPUs, sometimes numbering in the tens of thousands and spread across giant datacenters, such as those Microsoft runs in Azure, which rank among the most powerful computers ever built.
Learn more about the latest AI news at Microsoft Source and our news from Indonesia via this page.
-END-
Transferring Reusable PowerShell Objects Between Microsoft 365 Tenants
The Graph SDK’s ToJsonString Method Proves Its Worth
One of the frustrations of using the internet is finding code that seems useful, copying it to try out in your tenant, and discovering that some formatting issue prevents it from running. This happens for many reasons. Sometimes it’s as simple as an error when copying code into a web editor; sometimes errors creep in after copying, perhaps when formatting the code for display. I guess fixing the problems is an opportunity to learn what the code really does.
Answers created by generative AI solutions like ChatGPT, Copilot for Microsoft 365, and GitHub Copilot compound the problem by faithfully reproducing errors in their responses. This is no fault of the technology, which works by creating answers from what’s gone before. If published code includes a formatting error, generative AI is unlikely to find and fix the problem.
Dealing with JSON Payloads
All of which brings me to a variation on the problem. The documentation for Graph APIs used to create or update objects usually includes an example of a JSON-formatted payload containing the parameter values for the request. The Graph API interprets the JSON content in the payload to extract the parameters needed to run the request. By comparison, Microsoft Graph PowerShell SDK cmdlets use hash tables and arrays to pass parameters. The hash tables and arrays mimic the elements of the JSON structure used by the underlying Graph APIs.
Composing a JSON payload is no challenge if you can write perfect JSON. Like any other set of programming or formatting rules, JSON takes time to become fluent in, and who can afford that time when other work needs to be done? Here’s a way to make things easier.
Every object generated by a Graph SDK cmdlet has a ToJsonString method to create a JSON-formatted version of the object. For example:
$User = Get-MgUser -UserId Kim.Akers@office365itpros.com
$UserJson = $User.ToJsonString()
$UserJson
{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
  "id": "d36b323a-32c3-4ca5-a4a5-2f7b4fbef31c",
  "businessPhones": [ "+1 713 633-5141" ],
  "displayName": "Kim Akers (She/Her)",
  "givenName": "Kim",
  "jobTitle": "VP Marketing",
  "mail": "Kim.Akers@office365itpros.com",
  "mobilePhone": "+1 761 504-0011",
  "officeLocation": "NYC",
  "preferredLanguage": "en-US",
  "surname": "Akers",
  "userPrincipalName": "Kim.Akers@office365itpros.com"
}
The advantage of using the ToJsonString method instead of PowerShell’s ConvertTo-Json cmdlet is that the method doesn’t output properties with empty values. This makes the resulting output easier to review and manage. For instance, the JSON content shown above is much easier to use as a template for adding new user accounts than the equivalent generated by ConvertTo-Json.
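The effect is easy to picture in any language. As a neutral illustration (Python here, purely to show the idea; the property names and values are invented), dropping empty properties before serializing is what turns a sprawling object into a compact template:

```python
import json

# An object with some populated and some empty properties, loosely
# modeled on a user account (property names are illustrative only).
user = {
    "displayName": "Kim Akers",
    "jobTitle": "VP Marketing",
    "faxNumber": None,   # empty values like these add noise
    "aboutMe": "",
}

# Serialize only the properties that carry values, which is the kind
# of compact output ToJsonString produces for SDK objects.
compact = json.dumps(
    {k: v for k, v in user.items() if v not in (None, "")},
    indent=2,
)
print(compact)   # faxNumber and aboutMe are gone
```

The result contains only meaningful properties, which is exactly why it works well as a template for creating new objects.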
Transferring a Conditional Access Policy Using ToJsonString
The output generated by ToJsonString becomes very interesting when you want to move objects between tenants. For example, let’s assume that you use a test tenant to create and fine tune a conditional access policy. The next piece of work is to transfer the conditional access policy from the test tenant to the production environment. Here’s how I make the transfer:
Run the Get-MgIdentityConditionalAccessPolicy cmdlet to find the target policy and export its settings to JSON. Then save the JSON content in a text file.
$Policy = Get-MgIdentityConditionalAccessPolicy -ConditionalAccessPolicyId '1d4063cb-5ebf-4676-bfca-3775d7160b65'
$PolicyJson = $Policy.toJsonString()
$PolicyJson > PolicyExport.txt
Edit the text file to replace any tenant-specific items with equivalent values for the target tenant. For instance, conditional access policies usually include an exclusion for break glass accounts, which are listed in the policy using the account identifiers. In this case, you need to replace the account identifiers for the source tenant in the exported text file with the account identifiers for the break glass account for the target tenant.
Disconnect from the source tenant.
Connect to the target tenant with the Policy.ReadWrite.ConditionalAccess scope.
Create a variable ($Body in this example) containing the conditional policy settings.
Run the Invoke-MgGraphRequest cmdlet to import the policy definition into the target tenant.
$Body = Get-Content PolicyExport.txt -Raw
$Uri = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
Invoke-MgGraphRequest -Uri $Uri -Method Post -Body $Body
The Other Way
Another way to create a conditional access policy with PowerShell is to run the New-MgIdentityConditionalAccessPolicy cmdlet, which takes a hash table as its payload. It’s easy to translate the JSON into the format used for parameter values stored in the hash table, but it’s even easier to run Invoke-MgGraphRequest and pass the edited version of the JSON exported from the source tenant. Why make things hard for yourself?
This tip is just one of the hundreds included in the Automating Microsoft 365 with PowerShell eBook (available separately, as part of the Office 365 for IT Pros (2025 edition) bundle, or as a paperback from Amazon.com).
Undefined symbol xcb_shm_id when trying to start up MATLAB
When trying to start up MatLab, I get
> ./bin/matlab
MATLAB is selecting SOFTWARE rendering.
/home/pblase/.MathWorks/ServiceHost/clr-df9a0cbb6bd34e079ef626671d1a7b7c/_tmp_MSHI_5363-9225-767d-e56f/mci/_tempinstaller_glnxa64/bin/glnxa64/InstallMathWorksServiceHost: symbol lookup error: /usr/lib64/libcairo.so.2: undefined symbol: xcb_shm_id
/home/pblase/.MathWorks/ServiceHost/clr-df9a0cbb6bd34e079ef626671d1a7b7c/_tmp_MSHI_5363-9225-767d-e56f/mci/_tempinstaller_glnxa64/bin/glnxa64/InstallMathWorksServiceHost: symbol lookup error: /usr/lib64/libcairo.so.2: undefined symbol: xcb_shm_id
Unexpected exception: 'N7mwboost10wrapexceptINS_16exception_detail39current_exception_std_exception_wrapperISt13runtime_errorEEEE: Error loading /home/pblase/matlab/bin/glnxa64/matlab_startup_plugins/matlab_graphics_ui/mwuixloader.so. /usr/lib64/libXt.so.6: undefined symbol: SmcModifyCallbacks: Success: Success' in createMVMAndCallParser phase 'Creating local MVM'
Intro to matlab lab and I have no idea how this works
<</matlabcentral/answers/uploaded_files/1765134/Screenshot%202024-09-02%20at%206.21.33%E2%80%AFPM.png>>
I don’t know what I am supposed to do with the second part of question 3 and also don’t know what to do with #4. This is my first time ever taking a class about coding so I’m super lost.
Poor performance of linprog in practice
I have to solve a dynamic programming problem using a linear programming approach. For details, please see this paper. The LP that I want to solve is:
min c'*v
s.t.
A*v>=u,
where c is n*1, v is n*1, A is n^2*n, u is n^2*1.
The min is with respect to v, the value function of the original DP problem. I have a moderate number of variables, n=300 and m=n^2*n=90000 linear inequalities as constraints. No bound constraints on v.
I use the Matlab function linprog which in turn is based on the solver HiGHS (since R2024a). The code is slow for my purposes (i.e. a brute-force value iteration is much faster). Moreover, linprog gives correct results only if I set the option 'Algorithm','dual-simplex-highs'. With other algorithms, it gets stuck.
After profiling the code, it turns out that the bottleneck is line 377 of linprog:
[x, fval, exitflag, output, lambda] = run(algorithm, problem);
I was wondering if there is a way to speed up the code. Any help or suggestion is greatly appreciated! I put below a MWE to illustrate the problem.
clear,clc,close all
%% Set parameters
crra = 2;
alpha = 0.36;
beta = 0.95;
delta = 0.1;
%% Grid for capital
k_ss = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
n_k = 300;
k_grid = linspace(0.1*k_ss,1.5*k_ss,n_k)';
%% Build current return matrix, U(k',k)
cons = k_grid'.^alpha+(1-delta)*k_grid'-k_grid;
U_mat = f_util(cons,crra);
U_mat(cons<=0) = -inf;
%% Using LINEAR PROGRAMMING
% min c'*v
% s.t.
% A*v>=u, where c is n*1, v is n*1, A is n^2*n, u is n^2*1
n = length(k_grid);
c_vec = ones(n,1);
u_vec = U_mat(:); % U(k',k), stacked columnwise
%% Build A matrix using cell-based method
tic
A = cell(n,1);
bigI = (-beta)*speye(n);
for i=1:n
temp = bigI;
temp(:,i) = temp(:,i)+1;
A{i} = temp;
end
A = vertcat(A{:});
disp('Time to build A matrix with cell method:')
toc
%% Call linprog
% 'dual-simplex-highs' (default and by far the best)
options = optimoptions('linprog','Algorithm','dual-simplex-highs');
tic
[V_lin,fval,exitflag,output] = linprog(c_vec,-A,-u_vec,[],[],[],[],options);
disp('Time linear programming:')
toc
if exitflag<=0
warning('linprog did not find a solution')
fprintf('exitflag = %d\n',exitflag)
end
%% Now that we solved for V, compute policy function
RHS_mat = U_mat+beta*V_lin; % (k',k)
[V1,pol_k_ind] = max(RHS_mat,[],1);
pol_k = k_grid(pol_k_ind);
% Plots
figure
plot(k_grid,V1)
figure
plot(k_grid,k_grid,'--',k_grid,pol_k)
function util = f_util(c,crra)
util = c.^(1-crra)/(1-crra);
end
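For readers checking their understanding of the constraint structure: the cell-based loop above assembles one block per current state k_i, where row j imposes the Bellman inequality v(k_i) - beta*v(k'_j) >= U(k'_j, k_i). A tiny pure-Python reconstruction of the same pattern (an illustration of the dense structure only, not a replacement for the sparse MATLAB code):

```python
# Rebuild a dense version of the A matrix from the question for a tiny n.
# Block i, row j encodes the inequality v_i - beta*v_j >= U(j, i).
def build_A(n, beta):
    rows = []
    for i in range(n):          # block i: current state k_i
        for j in range(n):      # row j within the block: next state k'_j
            row = [0.0] * n
            row[j] -= beta      # the -beta*speye(n) part of the block
            row[i] += 1.0       # the temp(:,i) = temp(:,i)+1 part
            rows.append(row)
    return rows

A = build_A(3, 0.95)
# The diagonal row of each block carries the single coefficient 1 - beta,
# matching the (i,i) entries of the MATLAB construction.
```

Seeing the pattern this way also makes clear why A has n^2 rows but only two nonzeros per row, which is why the sparse construction matters for n = 300.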
PROFILEI have to solve a dynamic programming problem using a linear programming approach. For details, please see this paper. The LP that I want to solve is:
min c’*v
s.t.
A*v>=u,
where c is n*1, v is n*1, A is n^2*n, u is n^2*1.
The min is with respect to v, the value function of the original DP problem. I have a moderate number of variables, n=300 and m=n^2*n=90000 linear inequalities as constraints. No bound constraints on v.
I use the Matlab function linprog which in turn is based on the solver HIGHS (since R2024a). The code is slow for my purposes (i.e. a brute-force value iteration is much faster). Moreover, linprog gives correct results only if I set the option ‘Algorithm’,’dual-simplex-highs’. With other algorithms, it gets stuck.
After profiling the code, it turns out that the bottleneck is line 377 of linprog:
[x, fval, exitflag, output, lambda] = run(algorithm, problem);
I was wondering if there is a way to speed up the code. Any help or suggestion is greatly appreciated! I put below a MWE to illustrate the problem.
clear,clc,close all
%% Set parameters
crra = 2;
alpha = 0.36;
beta = 0.95;
delta = 0.1;
%% Grid for capital
k_ss = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
n_k = 300;
k_grid = linspace(0.1*k_ss,1.5*k_ss,n_k)';
%% Build current return matrix, U(k',k)
cons = k_grid'.^alpha+(1-delta)*k_grid'-k_grid;
U_mat = f_util(cons,crra);
U_mat(cons<=0) = -inf;
%% Using LINEAR PROGRAMMING
% min c'*v
% s.t.
% A*v>=u, where c is n-by-1, v is n-by-1, A is n^2-by-n, u is n^2-by-1
n = length(k_grid);
c_vec = ones(n,1);
u_vec = U_mat(:); % U(k',k), stacked columnwise
%% Build A matrix using cell-based method
tic
A = cell(n,1);
bigI = (-beta)*speye(n);
for i=1:n
temp = bigI;
temp(:,i) = temp(:,i)+1;
A{i} = temp;
end
A = vertcat(A{:});
disp('Time to build A matrix with cell method:')
toc
%% Call linprog
% 'dual-simplex-highs' (default and by far the best)
options = optimoptions('linprog','Algorithm','dual-simplex-highs');
tic
[V_lin,fval,exitflag,output] = linprog(c_vec,-A,-u_vec,[],[],[],[],options);
disp('Time linear programming:')
toc
if exitflag<=0
warning('linprog did not find a solution')
fprintf('exitflag = %d\n',exitflag)
end
%% Now that we solved for V, compute policy function
RHS_mat = U_mat+beta*V_lin; % (k',k)
[V1,pol_k_ind] = max(RHS_mat,[],1);
pol_k = k_grid(pol_k_ind);
% Plots
figure
plot(k_grid,V1)
figure
plot(k_grid,k_grid,'--',k_grid,pol_k)
function util = f_util(c,crra)
util = c.^(1-crra)/(1-crra);
end
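Not part of the original question, but as a cross-check: here is the same LP set up on a deliberately small grid in Python with SciPy's linprog (also backed by HiGHS). It is a sketch, not a drop-in replacement, and it illustrates two tweaks that may also help in MATLAB: assembling A in one shot from Kronecker products, and dropping the vacuous rows where U(k',k) = -inf before the solver ever sees them.

```python
import numpy as np
from scipy import sparse
from scipy.optimize import linprog

# Model parameters (same as the MWE)
crra, alpha, beta, delta = 2.0, 0.36, 0.95, 0.1
k_ss = ((1 - beta*(1 - delta)) / (alpha*beta))**(1/(alpha - 1))
n = 25                                    # small grid, for illustration only
k = np.linspace(0.1*k_ss, 1.5*k_ss, n)

# Current return matrix U(k', k): rows index k', columns index k
cons = k[None, :]**alpha + (1 - delta)*k[None, :] - k[:, None]
with np.errstate(divide='ignore'):
    U = cons**(1 - crra) / (1 - crra)
U[cons <= 0] = -np.inf                    # infeasible consumption levels

# Constraint v(k) - beta*v(k') >= U(k', k) for every (k', k) pair.
# Stacking columnwise, A = kron(I_n, 1_n) - beta*kron(1_n, I_n),
# which reproduces the cell-loop construction in one shot.
ones = np.ones((n, 1))
A = sparse.kron(sparse.eye(n), ones) - beta*sparse.kron(ones, sparse.eye(n))
u = U.flatten(order='F')                  # same stacking as U_mat(:)
mask = np.isfinite(u)                     # drop vacuous -inf constraints

# min 1'v  s.t.  A v >= u  (signs flipped into linprog's A_ub x <= b_ub form)
res = linprog(np.ones(n), A_ub=-A.tocsr()[mask], b_ub=-u[mask],
              bounds=[(None, None)]*n, method='highs')
V = res.x
```

Both ideas translate directly to the MWE: `A = kron(speye(n),ones(n,1)) - beta*kron(ones(n,1),speye(n))` replaces the cell loop, and keeping only the rows where `u_vec` is finite shrinks the LP before calling linprog. Whether the LP then beats value iteration is a separate question; as the post notes, brute-force iteration is much faster at this problem size.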
linprog, performance MATLAB Answers — New Questions
How to import .EEG or text or excel file to EEGlab
Hi all, I have 1-hour EEG data with a sampling frequency of 291 Hz. I've installed EEGLAB v14.1.1 and tried to load my data files in '.EEG', text, and Excel formats, but none of them load into EEGLAB. It shows the following error. Please help me solve this issue, since I'm new to this EEGLAB software. eeg, eeglab, signal processing MATLAB Answers — New Questions
Conditional formating using formula
Hi,
I'm looking to apply a conditional format to a table (Table1) that highlights a row when one of its cells matches a cell in another table (Table2).
I've had a look online; the only thing I can find is a formula that works if I refer to an array of cells rather than another table in the workbook:
=MATCH(A2,Array1,0)
This only highlights a single cell, even if I apply the conditional format to the whole of Table1.
Can anyone help?
Thanks
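One commonly suggested workaround: conditional-formatting formulas cannot use structured table references directly, but they can reach a table column through INDIRECT. Assuming the values to match sit in column A of Table1 and in a Table2 column named MatchColumn (both names are placeholders), select Table1's whole data body and use a rule of this form:

```
=COUNTIF(INDIRECT("Table2[MatchColumn]"), $A2) > 0
```

The `$` anchors the column while the row stays relative, so every cell in a given row tests the same A-value; that is what makes the entire row highlight rather than a single cell.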
New Outlook:
Can’t sign in to the New Outlook. Besides, my hotmail is blocked and I cannot access my mails.
Migrating to 365 with 2 domains
I have a client that has two different domains (old and new). Example: old email: email address removed for privacy reasons; new email: email address removed for privacy reasons. It looks like their provider created aliases for the new domain. The problem is they still get email going to the old address, which gets forwarded(?) to the new address. I want to migrate them to 365. I'm pretty sure the migration will transfer their email history using the new address, but I'm not sure how the forwarding will work. Can I create aliases for the old addresses in 365 to do the same?
Upcoming marketplace webinars available in September
Whether you are brand new to marketplace or have already published multiple offers, our Mastering the Marketplace webinar series has a variety of offerings to help you maximize the marketplace opportunity. Check out these upcoming webinars in September:
▪ Creating your first offer in Partner Center (9/5): Learn how to start with a new SaaS offer in the commercial marketplace; set up the required fields in Partner Center and understand the options and tips to get you started faster!
▪ Creating Plans and Pricing for your offer (9/10): Learn about the payouts process lifecycle for the Microsoft commercial marketplace: how to register and the registration requirements, the general payout process from start to finish, what payment processes Partner Center supports, and how to view and access payout reporting.
▪ AI and the Microsoft commercial marketplace (9/12): Through the Microsoft commercial marketplace, get connected to the solutions you need—from innovative AI applications to cloud infra and everything in between. Join this session to learn what’s on our roadmap and see how the marketplace helps you move faster and spend smarter.
▪ Developing your SaaS offer (9/12): In this technical session, learn how to implement the components of a fully functional SaaS solution including how to implement a SaaS landing page and webhook to subscribe to change events, and how to integrate your SaaS product into the marketplace.
Find our complete schedule here: https://aka.ms/MTMwebinars
#ISV #maximizemarketplace #Azure #MSMarketplace #MSPartners
Formula returning dash when I add a new cell
Extremely frustrating. I use this sheet to track my side-job pay, and it glitches every time I try to edit it and returns 0. I am trying to add August to the gross pay total.
Tasks
When I open Tasks I get “The task owner has restricted this action,” and “This list cannot be modified as it no longer exists.” I am horrified as I use it every day. I can’t modify the task in any way. How can I fix this?
A generalisation of the MAP lambda helper function
Discussion topic. Your thoughts are welcome.
On Saturday I finally bit the bullet and completed a MAPλ Lambda function that generalises the in-built MAP Lambda helper function. As examples, I tried problems of generating the Kronecker product of two matrices and then one of generating variants of an amortisation table.
The original amortisation schedule uses SCAN to calculate closing balances step by step from opening balances. Having returned the closing balances as an array, the principal is inserted at the first element to give opening balances. An array calculation based on the same code is used to return other values of interest using HSTACK.
Following that, I created an array of loan terms {10, 15, 20} (years) and used the formula
= MAPλ(variousTerms, AmortisationTableλ(principal, rate, startYear))
to generate the result as a single spilled range.
I have posted a copy of MAPλ on GitHub
A version of Excel MAP helper function that will return an array of arrays (github.com)
The intention is that the function can be used without knowing how it works but you are, of course, welcome to try to pick through it.
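For readers wondering why a generalisation is needed at all: the built-in MAP cannot return an array from its lambda, because Excel does not support nested (array-of-array) results. A minimal illustration (the functions are standard; the example itself is mine):

```
=MAP(SEQUENCE(3), LAMBDA(n, SEQUENCE(n)))
```

This returns a #CALC! error rather than three stacked arrays, which is exactly the gap MAPλ is built to fill.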
Update Error for Windows 11 Insider Preview (10.0.26120.1542)
Hi!
When the Windows 11 Insider Preview (10.0.26120.1542) update started, it reached 1% and suddenly stopped.
I tried to run the Windows Update troubleshooter in Settings; it showed error 0x803C010A and didn't proceed either.
Has anyone solved this problem?
Thanks
How to sync Outlook Notes with Gmail account
I have Outlook 2021 desktop installed on my PC. I would like to sync the Outlook Notes:
with my Google Workspace account. Is this possible?
Default SQL Server Connection for SSMS
SQL 2019 – SSMS 19.3.4.0
I was always wrongly under the impression that SSMS required a server connection in the Object Explorer to run a script against. We have databases with the same names on 2 servers as we’re preparing for migration and I accidentally ran a script on server B, even though there appeared to be no connection open to server B. Only Server A was connected in the object explorer. I was then shocked to find that any new sql script I opened was connected to server B which had been closed out in Object Explorer.
What controls the default server for a script when opening via File / Open in SSMS? What is the best way to lock a script to a specific server, or to make it more obvious which server it is being applied to? I may need to get used to looking at the bottom right where the SQL Server is displayed, but I'd like to make it more foolproof.
I see activating SQLCMD Mode on the Query Menu is one option, but I wonder what the downside to this might be such that it is not default behaviour.
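One way to make a script self-locking once SQLCMD mode is enabled (Query menu > SQLCMD Mode) is an explicit :CONNECT at the top, so the batch always runs against the named server regardless of which connection the query window inherited. Server and database names below are placeholders:

```
:CONNECT ServerA
USE MyDatabase;
GO
-- Verify where this batch is actually running
SELECT @@SERVERNAME AS connected_server;
GO
```

The commonly cited downside, and presumably why it is not the default, is that SQLCMD mode changes parsing: lines beginning with : or !! are treated as commands, so scripts written this way can fail for colleagues who open them without SQLCMD mode enabled.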
AI Studio End-to-End Baseline Reference Implementation
Azure AI Studio is designed to cater to the growing needs of developers seeking to integrate advanced AI capabilities into their applications with a focus on operational excellence. Addressing key factors such as security, scalability, and regulatory adherence, Azure AI Studio ensures that AI deployments are seamless, sustainable, and strategically aligned with business objectives.
We’re excited to present the end-to-end baseline reference implementation for Azure AI Studio, a definitive guide designed to facilitate the deployment of AI workloads in the cloud. This architecture has been designed to assist organizations in finding structured solutions for deploying AI applications that are production ready in an enterprise environment at scale.
Features of the Baseline Architecture
This architecture incorporates several important features:
Secure Network Perimeter: Creates a secure boundary for AI applications with strict network security and segmentation capabilities.
Identity Management: Implements strong access management to regulate interactions and maintain secure operations within AI services and data.
Scalability: Provides a flexible infrastructure to support the growth of AI applications, ensuring performance is not sacrificed as demand increases.
Compliance and Governance: Maintains a commitment to following enterprise governance policies and meeting compliance standards throughout the life of an AI application.
Supported Scenarios of the Baseline Architecture
The reference architecture supports various important use cases, including:
AI Studio Project Playground: An integrated environment for engaging with Azure OpenAI technologies, where you can chat with your data, test out various AI-powered assistants, and utilize completion features for text. This tool serves as a one-stop shop to assess, refine, and validate your AI-driven projects.
Promptflow Workflows: This feature supports the development of complex AI workflows, integrating elements like custom Python scripts and large language model integrations, enhancing operational excellence.
Resilient, Managed Deployments: Manages the deployment of AI applications to Azure’s managed virtual networks, ensuring solid and dependable access via client UI hosted in Azure App Service.
Self-Hosting with Azure App Service: This alternative gives enterprises full control to customize and manage Promptflow deployment using Azure App Service leveraging advanced options such as availability zones.
You can find the reference implementation in the following link: aistudio-end-to-end-baseline-architecture
Microsoft Tech Community – Latest Blogs –Read More
AI Season for Developers!
If you are passionate about Artificial Intelligence and application development, don't miss the chance to watch this great Microsoft Reactor series. Over the season, we go from the fundamentals of Azure OpenAI to the latest innovations presented at Microsoft Build 2024, closing with the powerful Semantic Kernel framework for building intelligent applications. Every session is packed with demos so you can understand each concept and apply it effectively.
Episodes:
Episode 1: Introduction to Azure OpenAI
We explore the Azure OpenAI models, their capabilities, and how to integrate them with the Azure SDK.
Episode 2: Considerations for Deploying Models in Azure OpenAI
We learn how to manage service quota, balance performance and latency, plan cost management, and apply the RAG pattern to optimize your deployments.
Episode 3: What's New from Microsoft Build: Phi-3, GPT-4o, Azure Content Safety, and More
We cover the latest announcements from Microsoft Build, including Phi-3, GPT-4o with multimodal capabilities, the new Azure AI Studio, and Azure Content Safety.
Episode 4: Getting Started with Semantic Kernel
We introduce Semantic Kernel, an open-source SDK that lets you easily integrate advanced LLMs into your applications to create smarter, more natural experiences.
Episode 5: Build Your Own Copilot with Semantic Kernel
We learn how to use Semantic Kernel Plugins, Planners, and Memories to create copilots that work side by side with users, offering intelligent suggestions to complete tasks.
Don't miss it! Replay each episode to discover how you can take your applications to the next level with Microsoft AI.
Learn more and build your AI skills alongside this series with this collection of Microsoft Learn resources:
Speakers:
Luis Beltran – Microsoft MVP – LinkedIn
Pablo Piovano – Microsoft MVP – LinkedIn