Resignation Letter Templates for McDonald's Workers

Do you need to write a resignation letter? Here are some of the best resignation letter examples and templates for a variety of circumstances you can use to leave your job, including basic and formal letters, email resignation messages, letters giving two weeks’ notice, letters with a reason for leaving, short notice or no notice letters, personal reasons letters, letters announcing a new job, and retirement letters.

Whatever the circumstances of your departure, the examples below can help you craft a polite and appropriate resignation letter. Even if you tell your manager in person that you're resigning, it's a good idea to provide a formal letter with the details of your departure from the company.

Get ideas on what information to include in your letter, as well as what information to leave out.

When writing your own resignation letter, you can use these examples for inspiration. You can find a template below that you can download and use to write your own letter.

Creating Data Driven Subscriptions for Power BI Reports

One of the features that has never made the leap from SQL Server Reporting Services (SSRS) on-premises to the cloud is data-driven subscriptions. Users can subscribe to reports, but a data-driven subscription allows individual subscriptions to be stored in a central location and parameterized, while delivering the reports to multiple locations. This article will describe a pattern for accomplishing this using SharePoint lists as the subscription store, and Power Automate as the automation tool, for a no-code solution to this requirement.


In order to implement this pattern, it is necessary to have access to Power Automate and to SharePoint, both of which are available in Office 365. The custom connector described below uses the Power BI REST API and the ExportTo function, which require a dedicated capacity (Premium) in Power BI to work. This pattern works with both interactive (pbix) and paginated reports. Paginated reports also require the use of a dedicated capacity. Data-driven subscriptions in SSRS were always an Enterprise feature on-premises, so this requirement should come as no surprise.

Custom Connector

Currently, there are a number of actions available for Power BI within Power Automate. Unfortunately, none of these actions have the ability to render and save a report, but that is something that the Power BI REST API can do. It is, however, possible to call this API using a custom connector in Power Automate.

Chris Webb recently put together a series of articles on using the Export function in the Power BI REST API with Power Automate. The first article outlines the process of creating the connector, as well as a downloadable Swagger (Open API) definition file that this pattern is based on. The second describes using it within Power Automate.

I won't re-invent the wheel on the custom connector creation instructions here; instead, I'll just point you to the blogs above to create a connector. Once the custom connector is created, it will be possible to implement data-driven subscriptions.


Subscriptions can be stored just about anywhere, but for the purposes of this example, we're going to use a SharePoint list. What we want is the ability to specify the title of a report, the format we want it rendered in, and the destination. The custom connector will require the workspace ID and the report ID of the report in Power BI, in addition to the output format. We also want to be able to take advantage of parameters in paginated reports, so our subscription definition needs to contain a parameter/value pair as well.

The following SharePoint Columns will be used in a custom list:

Column Name       Column Type
----------------  -------------------
Title             Single line of text
Workspace GUID    Single line of text
Report GUID       Single line of text
File Format       Choice
Destination Type  Choice
Destination       Single line of text
ParameterName     Single line of text
ParameterValue    Single line of text

The choices for file format are the different output formats supported by the Export API: CSV, DOCX, IMAGE, MHTML, PDF, PNG, PPTX, XLSX, and XML. In my case I set the default to PDF, as that is the most common format, but the default is up to you.

Power Automate supports a wide variety of file storage mechanisms, so the choices for destination type really depend on what destinations you want to support. In my case, I chose OneDrive for Business, SharePoint libraries, and email recipients. Therefore, one subscription could save to SharePoint while another delivers a file to an email user. These destinations will be reflected in the Power Automate flow created below.

Once the list is created, it can be populated with a few entries. In my example below, I am rendering reports from tyGraph for Twitter. Three are paginated reports going to each of the above destinations, and the last is an interactive (pbix) report being delivered to a SharePoint library.

The first three in the list are passing in a different parameter value to each report. Report parameters are not available to interactive reports, so these values are left empty for the interactive report.

The workspace GUID and the report GUID can be obtained by opening the report in a browser, and then inspecting the URL. This is true for both paginated and interactive reports.

Power Automate

Chris Webb’s post referenced above describes a pattern for rendering an export file from a Power Automate flow. We will use this within the pattern here.

The flow will iterate through the subscription list, and for each item found will render the report and save it to the desired output location. It can be created with any trigger, and for our purposes we are using the Recurrence trigger.

The first action in the flow is the SharePoint Get items action. Configure it to get all of the items from the subscription list created above.

We will need a name for the output file in multiple saving steps. It’s a good idea to create a variable for the output file name for ease of maintainability. We therefore initialize “Output File Name” as a String variable next.

We then create an “Apply to Each” Action from the control group and apply it to the “value” output from the “Get items” step above. This will iterate through each of our subscriptions.

Within the loop, we next apply the “Export to File” action from the custom connector created above. Instead of hardcoding the values however, we supply the values saved in the subscription. In addition, we pass in the parameter values taken from the subscription.

The same action can be used for both interactive and paginated reports; interactive reports will simply ignore the paginated-specific options. Many options are available here; we are just utilizing a few of them. It should also be noted that this pattern only supports a single parameter/value pair. This is for simplicity's sake, as the action itself will support multiple pairs.

It is also important to note that the settings of each of these custom actions must be changed to turn off the “Asynchronous Pattern” for the action. Without doing this, the action will fail at run time, even though it may test successfully when creating the custom connector.

In the next step, we set the value of the output file name variable that we initialized above. This will be used when we send the file to the destination.

In this case, we use the title, the current time, and the file format extension to create the file name. The exact formula is completely optional, but it’s a good idea to make the names unique to avoid overwriting past reports.
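In the Flow this is done with an expression; as a rough illustration of the same naming logic in Python (the function name and the choice of extension mapping are my own, not part of the Flow):

```python
from datetime import datetime

def output_file_name(title: str, file_format: str, now: datetime) -> str:
    """Build a unique output file name from the subscription's title,
    the current time, and the file format used as the extension."""
    timestamp = now.strftime("%Y%m%d%H%M%S")
    return f"{title}-{timestamp}.{file_format.lower()}"

print(output_file_name("Daily Report", "PDF", datetime(2024, 1, 1, 12, 0, 0)))
# → Daily Report-20240101120000.pdf
```

Embedding the timestamp is what keeps each run from overwriting the previous report.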

In the next step, we wait. Rendering takes some time, and one of the outputs above gives us an indication of how long we need to wait. In order to do so, we use the built in “Delay” action in Power Automate.

For the value of “Count” we select the “retry-after” output from the Export to file action above. It returns the number of seconds that the service estimates for the rendering of the report. This is just an estimate, and no guarantee, so it is possible that when we check on the status of the report, it will not be complete. Therefore, we need to repeat checks until it is. For that, we use a “Do Until” Action, available from the “Controls” section of the flow.

We check the status of the report using the "Export Status" action of our custom connector. We therefore add this action into our loop, configure it appropriately, and turn off the "Asynchronous Pattern" option as above. The "Export Status" action takes three arguments: the workspace and report GUIDs (which we get from the SharePoint list item) and the exportId, which can be retrieved from the "id" field in the output of our "Export to File" action above.

The status reported as an output of this action will have one of four possible values: Succeeded, Failed, Running, or NotStarted. We want to continue checking as long as the status is neither "Succeeded" nor "Failed". This is an advanced condition for the loop, so the Advanced option must be selected and the following code added:

@or(equals(body('Export_Status')?['status'], 'Succeeded'),equals(body('Export_Status')?['status'], 'Failed'))

Here, Export_Status is the name of the action. Keep in mind that the expression language is case sensitive.

The next action added is a condition where we inspect the value of the "status" output from the "Export Status" action. The two conditions that we look for are Running or NotStarted. If either of these is true, we need to wait for another estimated time interval. The entire loop will appear as below when configured.
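The wait-then-poll logic that the Delay, Do Until, and Export Status steps implement can be sketched in Python as follows; here get_status stands in for the "Export Status" action, and all names are illustrative, not part of the Flow:

```python
import time

def wait_for_export(get_status, retry_after: int, poll_interval: int = 5, sleep=time.sleep):
    """Wait out the service's estimate, then poll until the export reaches
    a terminal state ('Succeeded' or 'Failed')."""
    sleep(retry_after)                             # the "Delay" step, driven by retry-after
    status = get_status()                          # the "Export Status" action
    while status not in ("Succeeded", "Failed"):   # i.e. still Running or NotStarted
        sleep(poll_interval)
        status = get_status()
    return status

# Simulated service: still running on the first check, finished on the second.
responses = iter(["Running", "Succeeded"])
print(wait_for_export(lambda: next(responses), retry_after=30, sleep=lambda s: None))
# → Succeeded
```

The key point mirrored from the Flow: retry-after is only an estimate, so a single delay is not enough and the status check must repeat until a terminal value comes back.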

Once the loop completes, we need to inspect the status field to see if it was successful, or if it failed. If it failed, we do nothing, but if it succeeds, we need to retrieve the report for storage in our destination. For this, we will add another condition AFTER the “Do Until” loop to inspect the status output.

Along the no branch, we add nothing, but if the output was successful, we retrieve the contents of the report with the “Get Export File” action of our custom connector. The “Get Export File” accepts the same arguments as the “Export Status” action, and has a single output – Body, which will contain the body of the report.

Once the body of the report has been retrieved, we need to send it to the destination. The destination will be determined from the “Destination Type” and “Destination” values from our subscription. For this, we use the “Switch” action from the Control section. In our case we have case branches for OneDrive for Business, SharePoint, and eMail. Fully configured, these branches appear as below.
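As a rough Python illustration of that Switch routing (the handlers here just return strings for demonstration; in the Flow, each case branch is a real save or send action, and the destination type names are the ones chosen for this example):

```python
def deliver(destination_type: str, destination: str, file_name: str, body: bytes) -> str:
    """Route a rendered report to its destination, mirroring the Switch action."""
    handlers = {
        "OneDrive for Business": lambda: f"Saved {file_name} to OneDrive folder {destination}",
        "SharePoint":            lambda: f"Saved {file_name} to SharePoint library {destination}",
        "eMail":                 lambda: f"Mailed {file_name} to {destination}",
    }
    if destination_type not in handlers:
        raise ValueError(f"No case branch for destination type: {destination_type}")
    return handlers[destination_type]()

print(deliver("eMail", "user@contoso.com", "report.pdf", b""))
# → Mailed report.pdf to user@contoso.com
```

Adding a new destination is just a matter of adding another entry, which is exactly why this pattern is less constrained than the fixed output list in classic SSRS subscriptions.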

Of course, your branches will reflect your possible destinations. The number of possible destinations is large and constantly evolving. In this way, this approach is much less constrained than the classic “data driven subscription” feature in SSRS which supported a fixed number of outputs.

Final Thoughts

While the classic data-driven subscriptions feature from SSRS Enterprise will likely not be returning, it is possible to recreate the capability with this approach. Its decoupled nature makes it more flexible, allowing designers to add their own logic and destinations into the process.


Office 365 Graph API: download document from SharePoint library

With a few adaptations, the Office 365 Graph API will allow you to download a document using your favorite development language.

This tool, very intuitive and easy to use, is there to help you:

On the left side, simply log in with your Office 365 account to be ready to use the tool; don't forget to add the SharePoint samples to quick-start your development.

First things first: you have to get the ID of the SharePoint site where your file is located, from this URL:

https://graph.microsoft.com/v1.0/sites/{host-name}:/{server-relative-path}

Replace:

{host-name} with your SharePoint root site host name, and {server-relative-path} with the server-relative path of your site collection if you are not using your SharePoint tenant root site.

After clicking on "Run query", if you didn't make any mistake, you will see your site information:


So my site collection ID is "{host-name},b10d0761-01e2-47d0-b604-223322725e41,52596943-0f73-4f73-9695-2ac1961f1dfa" (the format is host name, site collection ID, web ID).

We will now use this ID for the next actions. My base URL will be:

https://graph.microsoft.com/v1.0/sites/{host-name},b10d0761-01e2-47d0-b604-223322725e41,52596943-0f73-4f73-9695-2ac1961f1dfa

For a SharePoint site, a drive is a document library. You can list a site's drives using this URL:

https://graph.microsoft.com/v1.0/sites/{site-id}/drives

You will get a list of Drives with the following fields:

createdDateTime, description, id, lastModifiedDateTime, name, webUrl, driveType, createdBy, lastModifiedBy, owner, quota


We will store the "id" of the drive in order to get a specific document, either by ID or by its path from the document library root.

For drive URLs we will now use the following structure:

https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}

In order to get the details of a specific document based on its path, use the following structure:

https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}/root:/{item-path}

To get my file at the root of my document library:

I will use this URL:

https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}/root:/document.docx

From the response:

use the field "@microsoft.graph.downloadUrl" to download the file; this link will allow you to get the file without any authentication token.
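The URL structures above can be sketched with a few small Python helpers; the helper names are my own, and the IDs and document path in the example call are placeholders:

```python
# The well-known Microsoft Graph endpoint; site/drive IDs and the
# document path below are illustrative placeholders.
GRAPH = "https://graph.microsoft.com/v1.0"

def site_lookup_url(host_name: str, server_relative_path: str) -> str:
    """URL that resolves a site ID from a host name and server-relative path."""
    return f"{GRAPH}/sites/{host_name}:/{server_relative_path}"

def drives_url(site_id: str) -> str:
    """URL that lists the drives (document libraries) of a site."""
    return f"{GRAPH}/sites/{site_id}/drives"

def item_by_path_url(site_id: str, drive_id: str, item_path: str) -> str:
    """URL that gets a document's metadata by path from the drive root."""
    return f"{GRAPH}/sites/{site_id}/drives/{drive_id}/root:/{item_path}"

print(item_by_path_url("{site-id}", "{drive-id}", "document.docx"))
# → https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}/root:/document.docx
```

An HTTP GET on the "@microsoft.graph.downloadUrl" value from the item's JSON response then returns the file itself, with no Authorization header required.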

How to Set Up Microsoft Teams


If you are an individual, you can get Teams for free by following this tutorial. For companies, that will be covered in my next post.

This tool is a good fit if you want to share and collaborate on documents with your team, hold video conferences, chat from your computer or phone, and share notes on a virtual whiteboard.

Unlike Zoom or Slack, Microsoft Teams is fully integrated with Office Online, allowing you to edit documents right inside the tool. If you have Office installed on your machine, you can also open those files there.

The paid version gives each user 1 TB of storage instead of 2 GB.

The free version is limited to 300 users.

Paid Teams also comes with the other Office 365 services, which is useful for companies and makes deployment and management by the IT department easier.

In the free version you also can't schedule meetings inside Teams or record them. As a workaround, you can send an email invitation asking your colleagues to meet at a certain time.


Source : Microsoft


Dynamically loaded images in Power Apps are invisible

In this post I will look at dynamically loaded images and how these images may not appear in play mode.

Adding images to screens

In Power Apps Studio it is easy to create an image control by selecting the Image control from the Media menu.

You can now add your images by uploading them to the control, or you can upload them to the media library of the app.


Now, within the control, you can select the image.


That is easy.

Why create dynamically loaded images?

But what if you want different images depending on your data? For example, you want to show image1 when a number has been set to 1 and image2 when it has been set to 2.

Or maybe even show a red, amber or green image depending on a status value.

You might now create three image controls on top of each other and only make the one you want visible. But what if you could just have one control and set its image to the one you want?

Media URIs in Power Apps

Time to have a look at the way Power Apps references images.

I created an image control and a label that displays imgImageControl.Image.


The image URI will now be shown as the following URI in my label:

appres://resources/Images1

Generating the Uri dynamically

Now I've added a label that holds a number, and I can use this number to generate the URI using the expression:

"appres://resources/Image" & Number

Image visibility in play mode

When I now run the app in play mode I’m finding that my image doesn’t load! This is annoying.


So the Power Apps player is doing different things than my Power Apps development environment. That is not nice. You would expect that testing an app in the studio environment would be good enough, but it is not.

So when you get your users to use your app do make sure that you have tested the app in play mode first, as these dynamically loaded images may not appear.

I wonder how many other things are loaded in this way, as I have occasionally seen similar kinds of issues.

Fixing the invisible images

I can see from my first label that the Uri of the image is set correctly however the image doesn’t appear.

How can I resolve this?

I added my images as separate controls on a separate screen. Then I reloaded my app, and the image did appear.

Why does this happen?

It looks like the app will load only those images that have specifically been used in the app. Somehow, Power Apps seems to ignore the images whose names have been calculated.


PowerShell: How to develop interactive logs

In this article we will look at developing a logging mechanism for PowerShell automation components that can produce logs not just as a flat file but also in an interactive HTML format. The best part is that we can design this HTML log format using whatever HTML/CSS elements we like, to make it as intuitive as possible.

For the sake of clear understanding, I am going to explain the complete process in a series of steps.

We need to download & install the “EnhancedHTML2” module.

In Step 1 we can see the command to download and install the module. We can run this command from the PowerShell console.

Alternatively, we can download and install this module manually from the PowerShell Gallery, as shown in Step 2.

In Step 3 we have added the code to query all lists in a SharePoint site. I kept it simple and straightforward for this demo.

In Step 4 we have added code to build a collection holding the list of SharePoint lists, using a PowerShell hash table as shown below.


In Steps 5 and 6 we can see the implementation of the "EnhancedHTML2" module.

In Step 5 we convert the data collection (in row and column format) to an HTML fragment using the "ConvertTo-EnhancedHTMLFragment" cmdlet. In the same way, we can create any number of HTML fragments as per our needs.

This cmdlet has a "-PreContent" switch that allows you to add any HTML snippet before the fragment in the HTML output.

The "-MakeTableDynamic" switch tells the cmdlet to embed basic HTML plumbing, such as search and pagination, into the output.

In Step 6 we append all the HTML fragments to build a single HTML file using the "ConvertTo-EnhancedHTML" cmdlet.

This cmdlet has an "HTMLFragments" parameter that accepts a comma-separated list of the names of the fragments you want to include in the HTML document.

Another important parameter, "CssUri", accepts the path of the CSS file that defines the styling attributes for the HTML page.

"Out-File" defines the path where the HTML log file is saved.


In Step 7 we call the function that executes the steps above.


Step 8 shows the DLLs that we need to include in the script to make the SharePoint calls work.


Step 9 shows the CSS file that we referenced with "ConvertTo-EnhancedHTML" in Step 6.


Step 10 shows the HTML log file generated after we successfully executed the steps above.


In Steps 11 and 12 we can explore the HTML generated as part of this execution, and it is interesting to see how the HTML is structured, with references to the required JavaScript files and the body elements.

Notice that the generated HTML body content is in the form of a table; this is because we specified the "-As Table" switch with the "ConvertTo-EnhancedHTMLFragment" cmdlet in Step 5.

We can also see the "<h1>" tag in the HTML body; it is the same content that we included in the fragment using the "-PreContent" switch with the "ConvertTo-EnhancedHTMLFragment" cmdlet in Step 5.
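The idea behind these fragments — rows of data rendered as an HTML table, prefixed with PreContent markup — can be sketched outside PowerShell too. Here is a minimal Python illustration (this is not the EnhancedHTML2 module, just the same concept, with names of my own choosing):

```python
from html import escape

def to_html_fragment(rows, pre_content=""):
    """Render a list of dicts as an HTML table fragment, with an optional
    snippet (like -PreContent) emitted before the table."""
    if not rows:
        return pre_content
    headers = list(rows[0])
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(r[h]))}</td>" for h in headers) + "</tr>"
        for r in rows
    )
    return f"{pre_content}<table><tr>{head}</tr>{body}</table>"

print(to_html_fragment([{"List": "Documents", "Items": 12}], pre_content="<h1>Lists</h1>"))
# → <h1>Lists</h1><table><tr><th>List</th><th>Items</th></tr><tr><td>Documents</td><td>12</td></tr></table>
```

EnhancedHTML2 layers the dynamic plumbing (search, pagination, styling via CssUri) on top of exactly this kind of fragment.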


And finally, in Step 13 below, we can see the interactive HTML-based log. This is a feature-rich HTML page, and we can add even more features as per our needs, though that would cost additional effort.


That is all for this demo.

Hope you find it helpful.

SharePoint Framework Application Customizer Cross-Site Page Loading

SharePoint Framework Application Customizer Cross-Site Page Loading

I suspect that, like Elio Struyf and Velin Georgiev before me, we're all suffering from PTSD from trying to properly load an application customizer into modern pages. It all started with an issue posted in the sp-dev-docs repo that was about partial page loads across site collections but devolved into an issue with the OnInit function firing multiple times. Velin's post describing his solution to the issue starts with a masterful breakdown of the page loading cycle and his need to track page hits. Then, Elio's variation highlights other things to check, like what hub site the page belongs to and what UI language it supports, and how those things might impact the transition with relation to the application customizer. This post is going to take that one step further and address page transitions between sites where one site might include the application customizer and the other might not. This is all in reference to the Multilingual Pages solution that lives in the SP-Dev-Solutions repo. This is a 400-level blog post, so I'm not going to reiterate what Velin and Elio already did in their posts. Instead, I encourage you to pause here, go read their posts, and then come back to continue the journey. No worries, I'll wait…

So now that you’re all caught up, I’ve included this gist with some numerical placeholders that I’ll comment more on below.

My navigation handler is very similar. That said, because the application customizer could have been disposed of while the event handler still fires anyway (due to what I believe is a timing issue), I need to check not only that the current page has changed but also whether the navigated event was unsubscribed. My best guess is that it takes time to unregister the navigation event, so there's an asynchronous timing window in which the event is fired anyway. The render method starts by determining if the context on the page is undefined. That it could be (and believe me, it happens repeatedly) seems like a bug. If the context isn't defined, then we re-trigger the navigationEventHandler, which waits another 50 ms hoping the context has been populated. Once the context is valid, we verify the navigation event handler is set and we render the component. This is the secret sauce: here we determine if the location we're going to is a site that has the application customizer installed on it. Finally, assuming the application customizer is installed, we identify whether the container for our application customizer is available (and if not, create it) and then render our component. If the application customizer is not installed, we remove it from the DOM.

I truly hope this helps others out there that are struggling with their application customizers. Happy Coding!

Stop using ClientID and Secret to access your Office 365 services

So you are building an application. Most of the guidelines you find online will tell you that you need to use an Azure AD app registration. This Azure app registration will provide you access to an Office 365 service like Microsoft Graph, SharePoint, Exchange, and so on. This can be with either delegated or application permissions. This post is about application permissions.

The problem

For application permissions (app-only), you need a client ID and a secret, and to access SharePoint you even need to create a certificate and upload it to your Azure AD app registration. I've done this plenty of times, and if you want to know more about how that works, there is an excellent blog post from Bert Jansen. While this works great, it is a cumbersome procedure: you need to store the certificate or secret somewhere, and you need to refresh it from time to time. So there must be a better solution.

Managed Identity to the rescue

Managed identities have existed for a while now in Azure. If you haven't heard of them: the simplest description is that Microsoft converts, for example, the web app running your code into a known "app"-like identity. It creates a service principal and links it to your web app. It won't show in the UI, but it's there in the background; it's like a special type of Azure AD app registration. The cool part is that Azure creates a certificate and links it to that registration, refreshes that certificate when needed, and nobody can download it, increasing security in your environment. It's a little more nuanced than that, because you can create two kinds of managed identities: system-assigned and user-assigned.

System-assigned managed identity

So if you go to the Identity tab in your web app in Azure, you can choose to create a system-assigned managed identity. It's as simple as flipping the switch. (There are also user-assigned managed identities, but we are not going deeper into those.)

Enabling System Assigned managed identity for an Azure Web App. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that’s trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it’s enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.

As you can see, you get the object ID of the service principal that was created. This special type of app registration will not show up in the UI, but we can access it with the Azure CLI. With the Azure CLI we can also give it access to, for instance, SharePoint with app-only credentials. To do that you will need the correct GUID identifying the SharePoint API. Every API you can access in Azure also has an identity; I've created a reference table here to make it easier. Chances are that the one you need has a 'special' GUID.

Ok, now show some code

Let’s say I needed to have App-Only credentials to connect to SharePoint Online. In that case, I would need to give my Service Principal access to SharePoint. Open up your Cloud Shell and execute the following script.

This will give you all the App-Only Permissions available for SharePoint Online. Now let’s say I want to update all sites in my tenant (for example adding a field). So I would need to give the Managed Identity of my web app access to the SPO API with ‘Sites.FullControl.All’ permissions.

The configuration is now complete. Time to add this in code. If you want to authenticate with a managed identity, you need to add the NuGet package "Microsoft.Azure.Services.AppAuthentication"; getting your authentication token then takes only two lines of code.

var azureServiceTokenProvider = new AzureServiceTokenProvider();
string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://<yourTenant>.sharepoint.com");

Don't forget to change the URL to your own tenant. This will provide us with a bearer token that we can use in CSOM to connect to SharePoint. Now, to show you what the difference is when using a managed identity, I also log the currently logged-in user.

As you can see from the output, when I run the code on my dev machine it's my user that's logged in. If I run the code inside my Azure Web App, the SharePoint system account name is returned. For this, I also added User.Read.All permissions to my app-only permission list.

Updating from VS on my dev machine. Updating from an Azure Web App with Managed Identity activated.

Conclusion: The good and the bad

Now as you can imagine this method has benefits and drawbacks. On the positive side:

- No more managing secrets and/or certificates
- The dev only logs in with his credentials, and the production code uses the app-only credentials
- If the web app gets deleted, the managed identity is automatically removed
- This works on Web Apps and Azure Functions
- Works in C# or Node.js

But on the negative side: how do we troubleshoot when things go wrong? It's been a statement since the first computer was built: "It works on my machine." If the dev only has access to a handful of site collections, and those execute normally, but there is a site that is just a little different and throws an exception, then there is no way (besides decent logging) to find out what the problem is. We could add Azure Key Vault in the middle, but this would again add an additional component to manage.

How to Add a Custom Tile to App Launcher in SharePoint Online?

Requirement: Add a custom Tile to the SharePoint Online app launcher.

How to Customize the SharePoint Online App Launcher?

The app launcher in Office 365 is the grid icon in the top-right corner of the screen that lets you quickly access programs or sites. You can add other applications or sites to the Office 365 app tiles. Here is how:

1. Login to the Office 365 admin center as a "Global Administrator".
2. Click on Settings in the left navigation >> Org Settings >> click on the "Organization profile" tab.
3. Click on "Custom app launcher tiles" and then click on the "Add a custom tile" link in the custom app launcher tiles pane.
4. Enter a Name, URL, Description, and Image URL. The image must be 50×50 pixels and must be uploaded to a location, such as a SharePoint Online library, that is shared with everyone. In my case, I uploaded an icon to the "Site Assets" library of the intranet portal.
5. Click Submit.

This takes some time to reflect; for me, it took 30 minutes. You'll get a notification once it's updated. Also, this feature requires that you have an Exchange Online mailbox assigned and that you've signed into your mailbox at least once.

The custom tile appears under the app launcher's All apps. Unfortunately, we've lost the ability to pin custom tiles to the app launcher!


New CodeTour Option for SharePoint Framework (SPFx) Upgrades in the Office 365 CLI


I tried out the relatively new CodeTour option in the Office 365 CLI today to upgrade a SharePoint Framework (SPFx) Web Part solution. I originally built the Web Part to work in SharePoint 2019, so the underlying SPFx version was 1.4.1. I wanted to get it “up to rev” to 1.10.0 to put a cleaned up version of it into the pnp/sp-dev-fx-webparts repo of SPFx samples.

The first thing I needed to do was install the CodeTour extension for VSCode, which you can download from the marketplace.

Next, I made sure I had the latest version of the Office 365 CLI by running the npm command (at the time, the CLI was distributed as the npm package @pnp/office365-cli):

npm install -g @pnp/office365-cli

You’ll need version 2.1.0 or greater. The line above will install the Office 365 CLI globally, which is probably what you want.

To get the tour started (and who doesn’t like a good tour?), I simply ran the following:

o365 spfx project upgrade --output tour

This created a tour in the CodeTour section in the left rail in VS Code, which looks something like this:

To start the tour, I just clicked on the Upgrade project… row, which expanded out all the steps I needed to complete the upgrade.

Because I was going all the way from SPFx 1.4.1 to 1.10.1, there were 44 steps. That's more than you would generally see, since it's such a big jump.

The great thing about this is each step does the following:

- Highlights the row of code or parameter which needs to change, so you see the exact context for the change.
- Provides some information about what the step will do. I expect these messages will improve over time.
- Gives you a link to click to accomplish the step if it's possible to automate it. If it's not possible to automate, you get instructions about what to do yourself.

In the past, when I used the spfx project upgrade command, I'd scroll all the way down to the output, copy the commands out, and just paste them into the terminal. I didn't really take the time to understand what specifically was happening because I didn't need to. (I trust Waldek Mastykarz (@waldekm) and Garry Trinder (@garrytrinder), who manage the Office 365 CLI project. Maybe I trust them too much?)

I like the fact that the CodeTour approach lets me pay a bit more attention, without spending much more time to get the work done. If you're interested in understanding more about the internal workings of SPFx, this is a nice middle ground: it gets the upgrade done pretty painlessly, but you can follow along and learn some things.

Give it a try and let me know what you think in the comments…

I was remiss in the initial version of this post in not giving Hugo Bernier (@bernierh) credit for his work on the CodeTour updates.

