Channel: ASP.NET Blog

Development time IIS support for ASP.NET Core Applications


With a recent update to Visual Studio 2017, we have added support for debugging ASP.NET Core applications against IIS. This blog post will walk you through enabling this feature and setting up your project to use this feature.

Getting Started

To get started:

  • You need to install Visual Studio 2017 (version 15.3) Preview (it will not work with any earlier version of Visual Studio)
  • You must have the ASP.NET and web development workload OR the .NET Core cross-platform development workload installed

Enable IIS

Before you can enable Development time IIS support in Visual Studio, you will need to enable IIS. You can do this by selecting the Internet Information Services checkbox in the Turn Windows features on or off dialog.

If your IIS installation requires a reboot, make sure to complete it before proceeding to the next step.

Development time IIS support

Once you’ve installed IIS, you can launch the Visual Studio installer to modify your existing Visual Studio installation. In the installer, select the Development time IIS support component, which is listed as an optional component under the ASP.NET and web development workload. This will install the ASP.NET Core Module, which is a native IIS module required to run ASP.NET Core applications on IIS.

Adding support to an existing project

You can now create a new launch profile to add Development time IIS support. Make sure to select IIS from the Launch dropdown in the Debug tab of the Project properties of your existing ASP.NET Core application.

Alternatively, you can manually add a launch profile to your launchSettings.json file:

{
    "iisSettings": {
        "windowsAuthentication": false,
        "anonymousAuthentication": true,
        "iis": {
            "applicationUrl": "http://localhost/WebApplication2",
            "sslPort": 0
        }
    },
    "profiles": {
        "IIS": {
            "commandName": "IIS",
            "launchBrowser": true,
            "launchUrl": "http://localhost/WebApplication2",
            "environmentVariables": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            }
        }
    }
}

Congratulations! At this point, your project is all set up for development time IIS support. You may be prompted to restart Visual Studio if you weren’t running it as an administrator.

Conclusion

We look forward to you trying out this feature, so let us know how it works for you. You can do that below, or via Twitter @AndrewBrianHall and @sshirhatti.


Announcing ASP.NET Core 2.0


The ASP.NET team is proud to announce general availability of ASP.NET Core 2.0.  This release features compatibility with .NET Core 2.0, tooling support in Visual Studio 2017 version 15.3, and the new Razor Pages user-interface design paradigm.  For a full list of updates, you can read the release notes.  The latest SDK and tools can be downloaded from https://dot.net/core. Read the .NET Core 2.0 release announcement for more information and watch the launch video:

 

With the ASP.NET Core 2.0 release we’ve added many new features to make building and monitoring web apps easier and we’ve worked hard to improve performance even more.

Updating a Project to ASP.NET Core 2.0

ASP.NET Core 2.0 runs on both .NET Framework 4.6.1 and .NET Core 2.0, so you will need to update your target framework in your project to netcoreapp2.0 if you were previously targeting a 1.x version of .NET Core.

Figure 1 – Setting Target Framework in Visual Studio 2017

Next, we recommend you reference the new Microsoft.AspNetCore.All metapackage instead of the collection of individual Microsoft.AspNetCore.* packages that you previously used.  This new metapackage contains references to all of the AspNetCore packages and maintains a complete line-up of compatible packages.  You can still include explicit references to specific Microsoft.AspNetCore.* package versions if you need one that is outside of the lineup, but our goal is to make this as simple a reference as possible.

What happens at publication time?  We know that you don’t want to publish the entire AspNetCore framework to your target environments, so the publish task now distributes only those libraries that you reference in your code.  This tree-pruning step should make your publish process smoother and your web applications easier to distribute.

More information about the features and changes you will need to address when migrating from ASP.NET Core 1.x to 2.0 can be found in our documentation.

Introducing Razor Pages

With this release of ASP.NET Core, we are introducing a new coding paradigm that makes writing page-focused scenarios easier and simpler than our current Model-View-Controller architecture.  Razor Pages are a page-first structure that allow you to focus on the user-interface and simplify the server-side experience by writing PageModel objects.

If you are familiar with how to configure your ASP.NET Core Startup class for MVC, then you already have the following lines in your Startup class:
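A minimal sketch of that standard MVC registration (the class body is trimmed to just the relevant calls):

```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the MVC services with the IoC container
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Adds MVC to the request processing pipeline
        app.UseMvc();
    }
}
```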

Surprise!  The AddMvc and UseMvc configuration calls in your Startup class also activate the Razor Pages feature.  You can start writing a Razor Page by placing a new cshtml file called Now.cshtml in the Pages/ top-level folder of your application.  Let’s look at a simple page that shows the current time:
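A sketch of such a page (the markup shown is illustrative):

```cshtml
@page

<h1>The time is @DateTime.Now.ToString("HH:mm:ss")</h1>
```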

This looks like a standard MVC View written in Razor, but it also has the @page directive at the top to indicate that this is a stand-alone Razor Page built with that paradigm.  HtmlHelpers, TagHelpers, and other .NET code are available to us throughout the page.  We can add methods just as we could in Razor views, by adding a block-level element called @functions and writing methods inside that element:
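A sketch of the same page with the logic moved into a @functions block (the method name is illustrative):

```cshtml
@page
@functions {
    // Methods declared here are available throughout the page
    public string CurrentTime() => DateTime.Now.ToString("HH:mm:ss");
}

<h1>The time is @CurrentTime()</h1>
```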

We can build more complex structures by taking advantage of the new PageModel object.  The PageModel is an MVVM architectural concept that allows you to execute methods and bind properties to the Page content that is being rendered.  We can enhance our sample by creating a Now.cshtml.cs C# class file in the Pages folder with this content:
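A sketch of such a page model (the exact property logic is illustrative):

```csharp
using System;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class NowModel : PageModel
{
    public DateTime LastModified { get; set; }

    // OnGet indicates this PageModel handles the HTTP GET verb
    public void OnGet()
    {
        LastModified = DateTime.Now;
    }
}
```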

With this class that inherits from PageModel, we can now do more complex interactions and build out a class that can be unit tested.  In this case, we are simply setting the LastModified property that the Now page will render.  Also note the OnGet method, which indicates that this PageModel handles the HTTP GET verb.  We can update our Razor Page with the following syntax to start using the PageModel and output the last update date:
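The page declares the model type and reads its property; a sketch:

```cshtml
@page
@model NowModel

<h1>Last updated: @Model.LastModified</h1>
```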

For more information, check out the ASP.NET Core documentation on getting started with Razor Pages.

Updated Templates and SPA Templates

The templates that ship with ASP.NET Core have been enhanced to include not only a web application built with the MVC pattern, but also a Razor Pages web application template, and a series of templates that enable you to build single-page applications (SPAs) for the browser.  These SPA templates use the JavaScript Services functionality to embed NodeJS within ASP.NET Core on the server, and compile the JavaScript applications server-side as part of the .NET build process.

Figure 2 – New ASP.NET Core Templates in Visual Studio 2017

These same templates are also available out of the box at the command-line when you type dotnet new:

Figure 3 – Templates available with the dotnet new command

DbContext Pooling with Entity Framework Core 2.0

Many ASP.NET Core applications can now obtain a performance boost by configuring the service registration of their DbContext types to use a pool of pre-created instances, avoiding the cost of creating new instance for every request.  Try adding the following code to your Startup/ConfigureServices to enable DbContext pooling:
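A sketch of the registration, assuming a hypothetical BloggingContext DbContext and a connectionString variable:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // AddDbContextPool hands out pooled, reset DbContext instances
    // instead of constructing a new one for every request
    services.AddDbContextPool<BloggingContext>(
        options => options.UseSqlServer(connectionString));
}
```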

You can read more information about the updates included in Entity Framework Core 2.0 online in their announcement post.

Monitor and Profile with No Code Changes and Application Insights

ASP.NET Core 2.0 runs with no modifications necessary on Azure App Service and comes with integrations that provide performance profiling, error reporting, and diagnostics from Azure Application Insights. In Visual Studio 2017, right-click on your project and choose “Add – Application Insights Telemetry” to start collecting data from your application. You can then review the performance of your application including all log messages directly within Visual Studio 2017.

Figure 4 – Telemetry Reported in Visual Studio 2017

That’s nice when you’re developing your application, but what if your application is already in Azure?  We’ve got support in the Azure portal to start profiling and debugging, and it starts when you first publish your application and navigate to the cloud portal for your new app service.  Azure will prompt you with a new purple banner indicating that Application Insights for monitoring and profiling is available.

Figure 5 – Banner in Azure Portal offering to assist in configuring Application Insights

When you click through that banner, you will create an Application Insights service for your application and attach those features without recompiling or redeploying.  Shortly afterwards, your new Application Insights service will start reporting data about the activity captured.

Figure 6 – Initial Application Insights Overview on Azure Portal

It even shows the number of failed requests and errors in the application.  If you click through that area, you’ll see details about the failed requests:

Figure 7 – Failed Requests Report on Azure Portal

There is a System.Exception thrown and identified at the bottom of the screen.  If we click through that reported exception, we can see more about each time that exception was thrown.  When you click through a single instance of those exceptions, you get some neat information about the exception, including the call stack:

Figure 8 – Exception Analysis in the Azure Portal

Snapshot debugging in Application Insights now supports ASP.NET Core 2.0.  If you configure snapshot debugging in your application, the “Open Debug Snapshot” link at the top will appear and show the complete call stack, and you can click through method calls in the stack to review the local variables:

Figure 9 – Stack Trace in Azure Portal

Figure 10 – Local values reported in Azure Portal

 

Nice!  We can go one step further and click that “Download Snapshot” button in the top corner to start a debug session in Visual Studio right at the point this exception was thrown.

What about the performance of these pages?  From the Application Insights blade, you can choose the Performance option on the left and dig deeper into the performance of each request to your application.

Figure 11 – Application Profiling in Azure Portal

There are more details available in our docs about performance profiling using Application Insights.

If you want the raw logs about your application, you can enable the Diagnostic Logs in App Service and set the diagnostic level to Warning or Error to see this exception get thrown.

Figure 12 – Configure Logging within the Azure Portal

Finally, choose the Log Stream on the left and you can watch the same console that you would have on your developer workstation.  The errors and log messages of the selected severity level or greater will appear as they are triggered in Azure.

Figure 13 – Live Console Logging inside the Azure Portal

Most of the Application Insights features can be activated in ASP.NET Core without rebuilding and redeploying.  Snapshot debugging requires an extra step and some code to be added, but the configuration is as simple as an extra NuGet package and a line in your Startup class.

You can learn more about Application Insights Telemetry in our online documentation.

Razor Support for C# 7.1

The Razor engine has been updated to work with the new Roslyn compiler, and that includes support for C# 7.1 features like Default Expressions, Inferred Tuple Names, and Pattern-Matching with Generics.  To use C# 7.1 features in your project, add the following property to your project file and then reload the solution:

<LangVersion>latest</LangVersion>

C# 7.1 is itself in a preview state, and you can review the language specification for these features on their GitHub repository.

Simplified Application Host Configuration

Host configuration has been dramatically simplified, with a new WebHost.CreateDefaultBuilder included in the default ASP.NET Core templates that automatically sets up a Kestrel server, integrates with IIS when available, and configures the standard console logging providers.  Your Program.cs file is simplified to only this content:
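The simplified Program.cs from the 2.0 templates:

```csharp
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // Builds (but does not start) the WebHost with the default configuration
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}
```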

That reduces the possibility of accidentally breaking some of the standard configuration that most developers were not altering in their ASP.NET Core applications.  Why make you write the same boilerplate code over and over, when it could be simplified to 3 lines of code?

Summary

These updates in ASP.NET Core 2.0 provide new ways to write your applications and simplify some of the operational process of managing a production application.  A word of thanks to our .NET Community for their feedback, issues, and contributed source code on GitHub.  They’ve really been a huge help in delivering this new version of ASP.NET Core. We encourage you to download the latest .NET Core SDK from https://dot.net/core and start working with this new version of ASP.NET Core.  You can watch the launch video for .NET Core 2.0 and ASP.NET Core 2.0 at: https://aka.ms/dotnetcore2launchvideo

ASP.NET Core 2.0 Features


Last week we announced the release of ASP.NET Core 2.0 and described some top new features, including Razor Pages, new and updated templates, and Application Insights integration. In this blog post we are going to dig into more details of features in 2.0. This list is not exhaustive or in any particular order, but highlights a number of interesting and important features.

ASP.NET Core Metapackage/Runtime Store

ASP.NET Core has had a goal of allowing developers to only depend on what they need. While people generally appreciate the increased modularity, it came with an increased cost for everyone in the form of finding and depending on a relatively large set of smaller packages. In 2.0 we have improved this story with the introduction of the Microsoft.AspNetCore.All package.

Microsoft.AspNetCore.All is a metapackage, meaning it only has references to other packages, and it references:

  1. All ASP.NET Core packages.
  2. All Entity Framework Core packages.
  3. All dependencies used by ASP.NET Core and Entity Framework Core.

The version number of the Microsoft.AspNetCore.All metapackage represents the ASP.NET Core version and Entity Framework Core version (aligned with the .NET Core version). This makes it easy to depend on a single package and version number that gives you all available APIs and lines up the entire stack.

Applications that use the Microsoft.AspNetCore.All metapackage automatically take advantage of the .NET Core Runtime Package Store. The runtime store contains all the runtime assets needed to run ASP.NET Core 2.x applications on .NET Core. When you use the Microsoft.AspNetCore.All metapackage, no assets from the referenced ASP.NET Core packages are deployed with the application — the .NET Core Runtime Store already contains these assets. The assets in the runtime store are precompiled to improve application startup time. This means that by default, if you are using the Microsoft.AspNetCore.All metapackage, your published app size will be smaller and your apps will start up faster.

You can read more about the .NET Core Runtime Package Store here: https://docs.microsoft.com/en-us/dotnet/core/deploying/runtime-store

WebHost builder APIs

The WebHost builder APIs are static methods in the Microsoft.AspNetCore package that provide several ways of creating and starting a WebHost with a default configuration. These methods reduce the common code that the majority of ASP.NET Core applications are going to need. For example, you can use the default web host builder like this:
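The default builder usage from the MVC template:

```csharp
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // Kept to building only, so tools can inspect the host without starting it
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}
```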

The above code is taken from the MVC template in Visual Studio, and shows using the CreateDefaultBuilder method to construct a WebHost. You can see the configuration that CreateDefaultBuilder uses here.

The Program.BuildWebHost method is provided by convention so that tools, like EF migrations, can inspect the WebHost for the app without starting it. You should not do anything in the BuildWebHost method other than building the WebHost. We used an expression-bodied method in the templates to help indicate that this method shouldn’t be used for anything other than creating an IWebHost.

In addition to CreateDefaultBuilder, there are a number of Start methods that can be used to create and start a WebHost:
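For example, one of the Start overloads accepts a RequestDelegate, letting you stand up a responding server in a couple of lines (the response text is illustrative):

```csharp
// Start a WebHost that answers every request with the delegate below
using (var host = WebHost.Start(async context =>
    await context.Response.WriteAsync("Hello from a minimal WebHost!")))
{
    // Block until the host is shut down (e.g. Ctrl+C)
    host.WaitForShutdown();
}
```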

These methods provide a way to run an application in a single line of code, but more importantly they provide a way to quickly get a web server responding to requests and executing your code without any ceremony or extra configuration.

Configuration as a Core service

As we watched developers build applications with 1.x, and listened to the feedback and feature requests being made, it became obvious that several services should always be available in ASP.NET Core: namely IConfiguration, ILogger (and ILoggerFactory), and IHostingEnvironment. To that end, in 2.0 an IConfiguration object is now always added to the IoC container. This means you can accept IConfiguration in your controller or other types activated with DI, just as you can with ILogger and IHostingEnvironment.

We also added a WebHostBuilderContext to WebHostBuilder. WebHostBuilderContext allows these services to be configured earlier, and be available in more places:
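For example, the context parameter surfaces the environment and configuration during host construction; a sketch:

```csharp
var host = new WebHostBuilder()
    .UseKestrel()
    .ConfigureAppConfiguration((context, config) =>
    {
        // context is a WebHostBuilderContext: the hosting environment and
        // configuration are usable before the WebHost has been built
        config.SetBasePath(context.HostingEnvironment.ContentRootPath)
              .AddJsonFile("appsettings.json", optional: true);
    })
    .UseStartup<Startup>()
    .Build();
```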

The WebHostBuilderContext is available while you are building the WebHost and gives access to an IHostingEnvironment and IConfiguration object.

Logging Changes

There are three main differences in the way that Logging can be used in 2.0:

  1. Providers can be registered and picked up from DI instead of being registered with ILoggerFactory, allowing them to consume other services easily.
  2. It is now idiomatic to configure Logging in your Program.cs. This is partly for the same reason that configuration is now a core service: you need Logging to be available everywhere, which means it should be configured early.
  3. The log filtering feature that was previously implemented by a wrapping LoggerFactory is now a feature of the default LoggerFactory, and is wired up to the registered configuration object. This means that all log messages can be run through filters, and they can all be configured via configuration.

In practice what these changes mean is that instead of accepting an ILoggerFactory in your Configure method in Startup.cs you will write code like the following:
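A sketch of configuring logging on the builder in Program.cs:

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging((context, logging) =>
        {
            // Filter rules come from the "Logging" configuration section
            logging.AddConfiguration(context.Configuration.GetSection("Logging"));
            logging.AddConsole();
        })
        .UseStartup<Startup>()
        .Build();
```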

Kestrel Hardening

The Kestrel web server has new features that make it more suitable as an Internet-facing server. We’ve added a number of server constraint configuration options in the KestrelServerOptions class’s new Limits property. You can now add limits for the following:

  • Maximum client connections
  • Maximum request body size
  • Minimum request body data rate
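A sketch of setting these limits on the builder (the values shown are illustrative, not recommendations):

```csharp
WebHost.CreateDefaultBuilder(args)
    .UseKestrel(options =>
    {
        options.Limits.MaxConcurrentConnections = 100;
        options.Limits.MaxRequestBodySize = 10 * 1024 * 1024; // 10 MB
        options.Limits.MinRequestBodyDataRate = new MinDataRate(
            bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
    })
    .UseStartup<Startup>()
    .Build();
```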

WebListener Rename

The packages Microsoft.AspNetCore.Server.WebListener and Microsoft.Net.Http.Server have been merged into a new package Microsoft.AspNetCore.Server.HttpSys. The namespaces have been updated to match.

For more information, see Introduction to Http.sys.

Automatic Page and View compilation on publish

Razor page and view compilation is enabled during publish by default, reducing the publish output size and application startup time. This means that your razor pages and views will get published with your app as a compiled assembly instead of being published as .cshtml source files that get compiled at runtime. If you want to disable view pre-compilation, then you can set a property in your csproj like this:
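The property that controls this is MvcRazorCompileOnPublish; setting it to false in a PropertyGroup disables precompilation:

```xml
<PropertyGroup>
  <MvcRazorCompileOnPublish>false</MvcRazorCompileOnPublish>
</PropertyGroup>
```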

If you do this you will have .cshtml files in your published output again, as well as all the reference assemblies that might be required to compile those files at runtime.

Tag Helper components

Tag helper components are responsible for generating or modifying a specific piece of HTML. They are registered as services and optionally executed by TagHelpers. For example, the new HeadTagHelper and BodyTagHelper will run all the registered tag helper components, so they can modify the head or body of the page being rendered.
This makes a tag helper component useful for tasks such as dynamically adding a script to all your pages. This is how we enable the Application Insights support in ASP.NET Core. The UseApplicationInsights method registers a tag helper component that is executed by the HeadTagHelper to inject the Application Insights JavaScript, and because it is done via DI we can be sure that it is only registered once, avoiding duplicate JavaScript being added.

IHostedServices

If you register an IHostedService, ASP.NET Core will call the StartAsync() and StopAsync() methods of your type during application start and stop respectively. Specifically, StartAsync is called after the server has started and IApplicationLifetime.ApplicationStarted is triggered.
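A minimal sketch of a hosted service and its registration (the service name is illustrative):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class QueueDrainService : IHostedService
{
    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Kick off background work; called once the server is up
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Finish or cancel background work during graceful shutdown
        return Task.CompletedTask;
    }
}

// In Startup.ConfigureServices:
// services.AddSingleton<IHostedService, QueueDrainService>();
```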

Today we only use hosted services in SignalR, but we have discussed using them for things like:

  • An implementation of QueueBackgroundWorkItem that allows a task to be executed on a background thread
  • Processing messages from a message queue in the background of a web app while sharing common services such as ILogger

IHostingStartup

If you read the ASP.NET Core 2.0 announcement blog post then you will have seen a lot of talk about automatic light-up of Application Insights. When you publish to an Azure App Service and enable Application Insights you get log messages and other telemetry “for free”, meaning that you don’t have to add any code to your application to make it work. This automatic light-up feature is possible because of the IHostingStartup interface and the associated logic in the ASP.NET Core hosting layer.
The IHostingStartup interface defines a single method: void Configure(IWebHostBuilder builder);. This method is called while the WebHost is being built in the Program.cs of your ASP.NET Core application, and allows code to set up anything that can be configured on a WebHostBuilder, including default services and loggers, which is how Application Insights works. ASP.NET Core will execute any IHostingStartup implementations in the application’s assembly, as well as any that are listed in an environment variable called ASPNETCORE_HOSTINGSTARTUPASSEMBLIES.
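A sketch of a hosting startup implementation in a class library (the library and class names are illustrative):

```csharp
using Microsoft.AspNetCore.Hosting;

// The assembly-level attribute tells the hosting layer which type to run
[assembly: HostingStartup(typeof(MyLibrary.MyHostingStartup))]

namespace MyLibrary
{
    public class MyHostingStartup : IHostingStartup
    {
        // Runs while the WebHost is being built, before the app's Startup
        public void Configure(IWebHostBuilder builder)
        {
            builder.ConfigureServices(services =>
            {
                // register default services, loggers, etc. here
            });
        }
    }
}
```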

Improved TempData support

We’ve made a couple of improvements to the TempData feature in this release:

  • The cookie TempData provider is now the default TempData provider. This means you no longer need to set up session support to make use of the TempData features.
  • You can now attribute properties on your controllers and page models with the TempDataAttribute to indicate that the property should be backed by TempData. Set the property to add a value to TempData, or read from the property to read from TempData.
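A sketch of the attribute in use on a controller (the controller, action, and property names are illustrative):

```csharp
public class HomeController : Controller
{
    // Backed by TempData, so the value survives one redirect
    [TempData]
    public string StatusMessage { get; set; }

    public IActionResult Save()
    {
        StatusMessage = "Saved!";              // writes to TempData
        return RedirectToAction(nameof(Index));
    }

    public IActionResult Index()
    {
        return Content(StatusMessage ?? "");   // reads from TempData
    }
}
```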

Media type suffixes

ASP.NET Core MVC now supports media type suffixes (e.g. application/foo+json). The JSON and XML formatters have been updated to support the json and xml suffixes respectively. For example, the JSON formatters now support parsing requests of the form Content-Type: application/*+json (with any parameters), and support formatting responses when your ObjectResult’s ContentTypes or Response.ContentType value are of the form application/*+json. We’ve extended the MediaType API to add SubtypeSuffix and SubtypeWithoutSuffix properties, and you can use wildcard patterns to indicate support for media type suffixes on your own formatters.

Summary

As you can see there is a lot of new stuff in ASP.NET Core 2.0. We hope you enjoy trying out these new features. Download .NET Core 2.0 today and let us know what you think!

Getting Started with Windows Containers


Containers provide a way of running an application in a controlled environment, isolated from other applications running on the machine, and from the underlying infrastructure. They are a cost-effective way of abstracting away the machine, ensuring that the application runs in the same conditions, from development, to test, to production.

Containers started in Linux as an OS-level virtualization method that creates the perception of a fully isolated and independent OS without requiring a full virtual machine. People have already been using Linux containers for a while. Docker greatly simplified containerization on Linux by offering a set of tools that make it easy to create, deploy, and run applications using containers.

Windows Server implements the container technology, and the Docker APIs and tool-set have been extended to support Windows Containers, offering developers who use Docker on Linux the same experience on Windows Server.

There are two kinds of container images available: Windows Server Core and Nano Server. Nano Server is lightweight and only for x64 apps. Windows Server Core image is larger and has more capabilities; it allows running “Full” .NET Framework apps, such as an ASP.NET application, in containers. The higher compatibility makes it more suitable as a first step in transitioning to containers. ASP.NET Core on .NET Core apps can run on both Nano Server and Server Core, but are better suited for running on Nano Server, because of its smaller size.

The following steps show how to get started on running ASP.NET Core and ASP.NET applications on Windows containers.

Prerequisites:

Install Docker

Install Docker for Windows – Stable channel

After installing Docker, you will need to log out of Windows and log back in; Docker may prompt you to do so. After logging in again, Docker starts automatically.

Switch Docker to use Windows Containers

By default, Docker is set to use Linux containers. Right-click on the docker tray icon and select “Switch to Windows Containers”.

Switch to Windows Containers

Running docker version will show that the Server OS/arch has changed to Windows after Docker is switched to Windows containers.

Docker version before switching to Windows containers

Docker version after switching to Windows Containers

Set up an ASP.NET or ASP.NET Core application to run in containers

ASP.NET as well as ASP.NET Core applications can be run in containers. As mentioned above, there are two kinds of container images available for Windows: Nano Server and Server Core containers. ASP.NET Core apps are lightweight enough that they can run in Nano Server containers. ASP.NET apps need more capabilities and require Server Core containers.

The following walkthrough shows the steps needed to run an ASP.NET Core and an ASP.NET application in a Windows Container. To start, create an ASP.NET or ASP.NET Core Web application, or use an existing one.

Note: ASP.NET Core applications developed in Visual Studio can have Docker support automatically added using Visual Studio Tools for Docker. Until recently, Visual Studio Tools for Docker only supported Linux Docker scenarios, but in Visual Studio 2017 version 15.3, support has been added for containerizing ASP.NET Core apps as Windows Nano images. Docker support with Windows Nano Server can be added at project creation time by checking the “Enable Docker Support” checkbox and selecting Windows in the OS dropdown, or it can be added later on by right-clicking on the project in Solution Explorer, then Add -> Docker Support.

This tutorial assumes that “Docker Support” was not checked when the project was created in Visual Studio, so that the whole process of adding Docker support manually can be explained.

Publish the App

The first step is to put together in one folder all the application artifacts needed for the application to run in the container. This can be done with the publish command. For ASP.NET Core, run the following command from the project directory; it publishes the app with the Release configuration into a folder, here named PublishOutput.

dotnet publish -c Release -o PublishOutput

dotnet Publish Output

Alternatively, use the Visual Studio UI to publish to a folder (for ASP.NET or ASP.NET Core):

Publish with Visual Studio

Create the Dockerfile

To build a container image, Docker requires a file with the name “Dockerfile” which contains all the commands, in order, to build a given image. Docker Hub contains base images for ASP.NET and ASP.NET Core.

Create a Dockerfile with the content shown below and place it in the project folder.

Dockerfile for ASP.NET Core application (use microsoft/aspnetcore base image)

FROM microsoft/aspnetcore:1.1
COPY ./PublishOutput/ ./
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]

The instruction FROM microsoft/aspnetcore:1.1 gets the microsoft/aspnetcore image with tag 1.1 from Docker Hub. The tag is multi-arch, meaning that Docker figures out whether to use the Linux or Nano Server container image depending on which container mode is set. You can also use the specific tag of the image: FROM microsoft/aspnetcore:1.1.2-nanoserver.
The next instruction copies the content of the PublishOutput folder into the destination container, and the last one uses the ENTRYPOINT instruction to configure the container to run an executable: the first argument to ENTRYPOINT is the executable name, and the second is the argument passed to the executable.

If you publish to a different location, you need to edit the Dockerfile. To avoid this, you can copy the content of the current folder into the destination container, as in the Dockerfile below; in that case, the Dockerfile needs to be added to the published output.

FROM microsoft/aspnetcore:1.1
COPY . .
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]

Dockerfile for ASP.NET application (use microsoft/aspnet base image)

FROM microsoft/aspnet
COPY ./PublishOutput/ /inetpub/wwwroot

An entry point does not need to be specified in the ASP.NET dockerfile, because the entry point is IIS, and this is configured in the microsoft/aspnet base image.

Build the image

Run the docker build command in the project directory to create the container image for the ASP.NET Core app.

docker build -t myaspnetcoreapp .

Build Your Application Image

The -t argument tags the image with a name. Running the docker build command will pull the ASP.NET Core base image from Docker Hub. Docker images consist of multiple layers; in the example above, ten layers make up the ASP.NET Core image.

The docker build command for ASP.NET will take significantly longer compared with ASP.NET Core, because the images that need to be downloaded are larger. If the image was previously downloaded, docker will use the cached image.

After the container image is created, you can run docker images to display the list and size of the container images that exist on the machine. The following is the image for the ASP.NET (Full Framework):

ASP.NET Full Framework Image

And this is the image for the ASP.NET Core:

ASP.NET Core Image

Note in the images above the differences in size for the ASP.NET vs ASP.NET Core containers: the image size for the ASP.NET container is 11.6GB, and the image size for the ASP.NET Core container is about ten times smaller.

Run the container

The command docker run will run the application in the container:

docker run -d -p 80:80 myaspnetcoreapp

Docker Run Results

The -d argument tells Docker to start the image in detached mode (disconnected from the current shell).

The -p argument maps the container port to the host port.

The ASP.NET app does not need the -p argument when running because the microsoft/aspnet image has already configured the container to listen on port 80 and expose it.

The docker ps command shows the running containers:

Docker ps Results

To give the running container a name and avoid getting an automatically assigned one, use the --name argument with the run command:

docker run -d --name myapp myaspnetcoreapp

This name can be used instead of the container ID in most docker commands.

View the web page running in a browser

Due to a bug that affects the way Windows talks to containers via NAT (https://github.com/Microsoft/Virtualization-Documentation/issues/181#issuecomment-252671828) you cannot access the app by browsing to http://localhost. To work around this issue, the internal IP address of the container must be used.

The address of the running Windows container can be obtained with:

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" <first chars of HASH>

Docker Inspect Results

Where HASH is the container ID; the name of the running container can be used instead.

Then type the URL returned into your browser (here, http://172.25.199.213:80) and you will see the site running.

Note that the limitation mentioned above only applies when accessing the container from localhost. Users from other machines, or other VMs or containers running on the host, can access the container using the host’s IP and port.

Wrap up

The steps above show a simple approach for adding docker support for ASP.NET Core and ASP.NET applications.

For ASP.NET Core, in addition to the base images that help build the docker container which runs the application, there are docker images available that help compile/publish the application inside the container, so the compile/publish steps can be moved inside the Dockerfile. The Dockerfile can use several base images, each used in a different stage of execution. This is known as a “multi-stage” build. A multi-stage build for ASP.NET Core uses the microsoft/aspnetcore-build base image, as in this GitHub sample: https://github.com/dotnet/dotnet-docker-samples/blob/master/aspnetapp/Dockerfile

Resources to help you get started with Windows containers:

Welcome to the New Blog Template for ASP.NET Developers


By Juliet Daniel, Lucas Isaza, and Uma Lakshminarayan

Have you always wanted to build a blog or other web application but haven’t had the time or educational resources to learn? With our blog template, available in our GitHub repo, you can create your web application fast and effortlessly, and even learn to master the new Razor Pages architecture along the way.

This blog post will explore how to use Razor Pages features and best practices and walk through the blog template code that we wrote.

This summer we had the awesome opportunity to be part of Microsoft’s Explore Program, a 12-week internship for rising college sophomores and juniors to learn more about software development and program management. As interns on the Visual Studio Web Tools team, our task was to create a web application template as a pilot for a set of templates showcasing new features and best practices in Razor Pages, the latest ASP.NET Core coding paradigm. We decided to build a blog template because of our familiarity with writing and reading blogs and because we believe that many developers would want a shortcut to build a personal or professional blog.

In our first week, the three of us all acted as Program Managers (PM) to prioritize features. Along with researching topics in web development, we had fun playing with different blog engines to help us brainstorm features for our project. After that, every three weeks we rotated between the PM and developer roles, with one of us acting as PM and the other two as developers. Working together, we’ve built a tool that we believe will inspire developers to build more web applications with Microsoft’s technologies and to contribute to the ASP.NET open source movement.

Introduction

This blog template is a tool to help developers quickly build a blog or similar web application. This blog template also serves as an example that shows how to build a web app from ASP.NET Core using the new Razor Pages architecture. Razor Pages effectively streamlines building a web application by associating HTML pages with C# code, rather than compartmentalizing a project into the Model-View-Controller pattern.

We believe that a blog template appeals to a broad audience of developers while also showcasing a variety of unique and handy features. The basic structure of the template is useful for developers interested in building an application beyond blogs, such as an e-commerce site, photo gallery, or personal website. All three alternatives are simply variations of a blog with authentication.

You can find our more detailed talk on the ASP.NET Community Standup about writing the blog template with code reviews and demos here. You can also access our live demo at https://venusblog.azurewebsites.net/ (Username: webinterns@microsoft.com, Password: Password.1).

Background

This template was designed to help Visual Studio users create new web applications fast and effortlessly. The various features built in the template make it a useful tool for developers:

  • Data is currently stored using XML files. This was an early design decision made to allow users on other blogs to move their data to this template smoothly.

    The usage of LINQ (Language Integrated Query) enables the developer to query items from the blog from a variety of sources such as databases, XML documents (currently in use), and in-memory objects without having to redesign or learn how elements are queried from a specific source.
  • The blog is built on Razor Pages from ASP.NET Core. The image below showcases the organization of the file structure that Razor Pages uses. Each view contains a corresponding Model in a C# file. Adding another Razor Page to your project is as simple as adding a new item to the Pages folder and choosing the Razor Page with model type.
  • The template includes a user authentication feature, implemented using the new ASP.NET Identity Library. This tool allows the owner of the blog to be the single user registered and in control of the blog. Identity also provided us with a tested and secure way to create and protect user profiles.
    We were able to use this library to implement login, registration, password recovery, and other user management features. To enable identity, we simply included it in the startup file and added the corresponding pages (with their models).
  • Customizing the theme is fast and flexible with the use of Bootstrap. Simply download a Bootstrap theme.min.css file and add it to the CSS folder in your project (wwwroot > css). You can find free or paid Bootstrap themes at websites such as bootswatch.com. You can delete our default theme file, journal-bootstrap.min.css, to remove the default theming. Run your project, and you’ll see that the style of your blog has changed instantly.
  • Entity Framework provides an environment that makes it easy to work with relational data. In our scenario, that data comes in the form of blog posts and comments for each post.

Using the Template

Creating an Instance of the Template (Your Own Blog)

There are two options for instantiating the template. You can use dotnet new, included with the dotnet CLI; however, the current version contains minor bugs that will be fixed soon. Until then, you’ll need to get the newest templating code with the following steps. Click the green “Clone or download” button. Copy the link in the dropdown that appears. Open a command prompt and change directories to where you want to install the templating repo.

In the desired directory, enter the command:

git clone <link you copied earlier>

This will pull all the dotnet/templating code and put it in a directory named “templating”. Now change to the templating directory and switch branches to “rel/2.0.0-servicing” by running:

git checkout rel/2.0.0-servicing

Then run the command “setup”.

  • Note: If you get errors about not being able to run scripts, close your command window. Then open a PowerShell window as administrator and run the command “Set-ExecutionPolicy Unrestricted”. Close the PowerShell window, then open a new command prompt, go back to the templating directory, and run setup again.

Once the setup runs correctly, you should be able to run the command “dotnet new3”. If you are just using the dotnet CLI, you can replace “dotnet new3” with “dotnet new” for the rest of the steps. Install your blog template with the command:

dotnet new3 -i [path to blog template source]

This path will be the root directory of your blog template repository.
Now you can create an instance of the template by running:

dotnet new3 blog -o [directory you want to create the instance in] -n [name for the instance]

For example:

dotnet new3 blog -o c:\temp\TestBlog\ -n "My Blog"

Reflection

We hope that our project encourages developers to build more web applications with Microsoft’s technologies and have fun doing so. Personally, we’ve learned a lot about web development and Razor Pages through developing this project. We’ve also developed useful skills to move forward in our careers. For example, we really enjoyed learning to brainstorm and prioritize features, which turned out to be a much more complicated process than any of us had expected. Sprint planning and time estimation also proved to be a tricky task. Sometimes it was hard to predict how much time it would take to implement certain features, but as we became more familiar with our project and our team’s engineering processes this became much easier.

Reaching out to the right people turned out to be a key ingredient to accelerating our development process and making sure we were building in the right direction. Once we began meeting with people outside of our assigned team, we realized almost immediately that it was a great way to get feedback on our project. We also began to look for the right people to ask our questions so the development of our project progressed even faster. Most importantly, we really appreciate how helpful and communicative our manager, Barry, and our mentors, Jimmy and Mads, were throughout the internship. They took time out of their busy schedules to help us and give us insightful career advice.

Juliet Daniel is a junior at Stanford studying Management Science & Engineering. In her free time, she enjoys biking, running, hiking, foodspotting, and playing music. She keeps a travel blog at juliets-journey.weebly.com.

Lucas Isaza is a junior at Stanford studying Economics and Applied Statistics. He enjoys playing basketball and lacrosse, exploring new restaurants in the area, and hanging out with friends.

Uma Lakshminarayan is a junior at UCLA studying Computer Science. She enjoys cooking and eating vegetarian foods, taking walks with friends, and discovering new music. You will usually find her singing or listening to music.

Announcing SignalR for ASP.NET Core 2.0


Today we are glad to announce an alpha release of SignalR for ASP.NET Core 2.0. This is the first official release of a new SignalR that is compatible with ASP.NET Core. It consists of a server component, a .NET client targeting .NET Standard 2.0 and a JavaScript/TypeScript client.

What’s New?

SignalR for ASP.NET Core is a rewrite of the original SignalR. We looked at common SignalR usage patterns and issues that users face today and decided that rewriting SignalR is the right choice. The new SignalR is simpler, more reliable, and easier to use. Despite these underlying changes, we’ve worked to ensure that the user-facing APIs are very similar to previous versions.

JavaScript/TypeScript Client

SignalR for ASP.NET Core has a brand-new JavaScript client. The new client is written in TypeScript and no longer depends on jQuery. The client can also be used from Node.js with a few additional dependencies.

The client is distributed as an npm module that contains the Node.js version of the client (usable via require), as well as a version for use in the browser which can be included using a <script> tag. TypeScript declarations for the client included in the module make it easy to consume the client from TypeScript applications.

The JavaScript client runs on the latest Chrome, Firefox, Edge, Safari and Opera browsers, as well as Internet Explorer versions 9, 10 and 11. (Not all transports are compatible with every browser.) Internet Explorer 8 and below are not supported.

Support for Binary Protocols

SignalR for ASP.NET Core offers two built-in hub protocols – a text protocol based on JSON and a binary protocol based on MessagePack. Messages using the MessagePack protocol are typically smaller than messages using the JSON protocol. For example, a hub method returning the integer value 1 produces a 43-byte message with the JSON-based protocol but only 16 bytes with MessagePack. (Note, the difference in size may vary depending on the message type, the contents of the message and the transport used – binary messages sent over the Server Sent Events transport are base64 encoded, since Server Sent Events is a text transport.)
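The size penalty for binary frames on a text transport comes from base64 itself, which encodes every 3 input bytes as 4 output characters. A quick Node.js sketch (unrelated to the SignalR client) shows the inflation:

```javascript
// base64 turns every 3 input bytes into 4 output characters,
// so a 5-byte binary payload grows to 8 characters on a text transport.
const payload = Buffer.from([0x01, 0x02, 0x03, 0x04, 0x05]);
const encoded = payload.toString("base64");
console.log(payload.length, encoded.length); // 5 8
```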

Support for Custom Protocols

The SignalR hub protocol is documented on GitHub and now has extension points that make it possible to plug in custom implementations.

Streaming

It is now possible to stream data from the server to the client. Unlike a regular Hub method invocation, streaming means the server is able to send results to the client before the invocation completes.

Using SignalR with Bare Websockets

The process of connecting to SignalR has been simplified to the point where, when using websockets, it is now possible to connect to the server with a single request and without any SignalR client library.

Simplified Scale-Out Model

Unfortunately, when it comes to scaling out applications there is no “one size fits all” model – each application is different and has different requirements that need to be considered when scaling out the application. We have worked to improve, and simplify, the scale-out model and are providing a Redis-based scale-out component in this alpha. Support for other providers, such as Service Bus, is being evaluated for the final release.

What’s Changed?

We added a number of new features to SignalR for ASP.NET Core but we also decided to remove support for some of the existing features or change how they work. One of the consequences of this is that SignalR for ASP.NET Core is not compatible with previous versions of SignalR. This means that you cannot use the old server with the new clients or the old clients with the new server. Below are the features which have been removed or changed in the new version of SignalR.

Simplified Connection Model

In the existing version of SignalR, the client would try to start a connection to the server and, if that failed, try again using a different transport, failing only once it had run out of available transports. This transport fallback is no longer supported in the new SignalR.

Another feature that is no longer supported is automatic reconnects. Previously, SignalR would try to reconnect to the server if the connection was dropped. Now, if the client is disconnected, the user must explicitly start a new connection to reconnect. Note that this could be required even before – the client would stop its reconnect attempts if it could not reconnect successfully within the reconnect timeout. Another reason to remove automatic reconnects was the very high cost of storing messages sent to clients. By default, the server would remember the last 1000 messages sent to a client so that it could replay the messages the client missed while offline. Since each connection had its own buffer, the memory footprint of storing these messages was very high.

Sticky Sessions Are Now Required

Because of how scale-out worked in the previous versions of SignalR, clients could reconnect and/or send messages to any server in the farm. Due to changes in the scale-out model, as well as the removal of reconnect support, this is no longer possible. Now, once the client connects to a server, it needs to interact with that server for the duration of the connection.

Single Hub per Connection

The new version of SignalR does not support having more than one Hub per connection. This results in a simplified client API, and makes it easier to apply authentication policies and other middleware to Hub connections. In addition, subscribing to hub methods before the connection starts is no longer required.

Other Changes

The ability to pass arbitrary state between clients and the Hub (a.k.a. HubState) has been removed, as has support for Progress messages. We also don’t currently have a counterpart to the generated hub proxies.

Getting Started

Setting up SignalR is relatively easy. After you create an ASP.NET Core application you need to add a reference to the Microsoft.AspNetCore.SignalR package like this

and a hub class:

This hub contains a method which once invoked will invoke the Send method on each connected client.
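As a rough sketch, a hub matching that description could look like the following (the Chat class name is an assumption; InvokeAsync is the alpha-era IClientProxy method mentioned later in this post):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class Chat : Hub
{
    // Invokes the "Send" client method on every connected client
    public Task Send(string message) => Clients.All.InvokeAsync("Send", message);
}
```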

After adding a Hub class you need to configure the server to pass requests sent to the chat endpoint to SignalR:
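A sketch of that server wiring, assuming a hub class named Chat (hypothetical name; the chat path comes from the text above):

```csharp
// In Startup.ConfigureServices
services.AddSignalR();

// In Startup.Configure: route requests for "chat" to the hub
app.UseSignalR(routes =>
{
    routes.MapHub<Chat>("chat");
});
```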

Once you set up the server you can invoke hub methods from the client and receive invocations from the server. To use the JavaScript client in a browser you need to install the signalr-client npm module first using the following command:

then copy signalr-client.js to your script folder and include it on your page using the <script> tag:

After you include the script you can start the connection and interact with the server like this:

To use the SignalR managed client you need to add a reference to the Microsoft.AspNetCore.SignalR.Client package:

Then you can invoke hub methods and receive invocations like this:

If you want to take advantage of streaming you need to create a hub method that returns either a ReadableChannel<T> or an IObservable<T>. Here is an example of a hub method streaming stock prices to the client from the StockTicker sample we ported from the old SignalR:

The JavaScript code that invokes this hub method looks like this:

Each time the server sends a stream item the displayStock client function will be invoked.

Invoking a streaming hub method from a C# client and reading the items could look as follows:

Migrating from existing SignalR

We will be releasing a guide on migrating from existing SignalR in the coming weeks.

Known issues

This is an alpha release and there are a few issues we know about:

  • Connections using the Server Sent Event transport may be disconnected after two minutes of inactivity if the server is running behind IIS
  • The WebSockets transport will not work if the server hosting SignalR is running behind IIS on Windows 7 or Windows Server 2008 R2, due to limitations in IIS
  • ServerSentEvents transport in the C# client can hang if the client is being closed while the data from the server is still being received
  • Streaming invocations cannot be canceled by the client
  • Generating a production build of an application using TypeScript client in angular-cli fails due to UglifyJS not supporting ES6. This issue can be worked around as described in this comment.

Summary

The long awaited version of SignalR for ASP.NET Core just shipped. Try it out and let us know what you think! You can provide feedback or let us know about bugs/issues here.

Announcing SignalR for ASP.NET Core Alpha 2


A few weeks ago we released the alpha1 version of SignalR for ASP.NET Core 2.0. Today we are pleased to announce a release of the alpha2 version of SignalR for ASP.NET Core 2.0. This release contains a number of changes (including API changes) and improvements.

Notable Changes

  • The JSON hub protocol now uses camel casing by default when serializing and deserializing objects on the server and by the C# client
  • IObservable subscriptions for streaming methods are now automatically unsubscribed when the connection is closed
  • It is now possible to invoke client methods in a type safe manner when using HubContext (a community contribution from FTWinston – thanks!)
  • A new HubConnectionContext.Abort() method allows terminating connections from the server side
  • Users can now control how their objects are serialized when using MessagePack hub protocol
  • Length prefixes used in binary protocols are now encoded using Varints which reduces the size of the message by up to 7 bytes
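The camel-casing change in the first bullet means a C# property such as StockPrice now arrives at the JavaScript client as stockPrice. A tiny sketch of the transform:

```javascript
// Sketch of the camel-casing applied by the JSON hub protocol:
// the first character of the property name is lowercased.
const toCamel = (name) => name.charAt(0).toLowerCase() + name.slice(1);
console.log(toCamel("StockPrice")); // stockPrice
```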

Release notes can be found on github.

API Changes

TypeScript/JavaScript client:

  • Event names were changed and now use lower case:
    • onDataReceived on IConnection was renamed to onreceive
    • onClosed on HubConnection and IConnection was renamed to onclose
  • It is now possible to register multiple handlers for the HubConnection onclose event by passing the handler as a parameter. The code used to subscribe to the closed event when using the alpha1 version of the client:

needs to be changed to:
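A rough illustration of the new shape, using a minimal stand-in class instead of the real signalr-client HubConnection so the snippet is self-contained:

```javascript
// Minimal stand-in mimicking the alpha2 onclose API: handlers are
// passed as parameters, so several of them can be registered.
class FakeHubConnection {
    constructor() {
        this.closeHandlers = [];
    }
    onclose(handler) {
        this.closeHandlers.push(handler);
    }
    // Simulates the connection closing and notifies every handler
    close(error) {
        for (const handler of this.closeHandlers) {
            handler(error);
        }
    }
}

const connection = new FakeHubConnection();
connection.onclose(() => console.log("first handler"));
connection.onclose(() => console.log("second handler"));
connection.close();
```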

  • The HubConnection on() method now allows registering multiple callbacks for a client method invocation
  • A new off() method was added to HubConnection to enable removing callbacks registered with the on method

C# Client

  • The HubConnection.Stream() method was changed to be async and renamed to StreamAsync()
  • New overloads of WithJsonHubProtocol() and WithMessagePackProtocol() on HubConnectionBuilder that take protocol-specific settings were added

Server

  • The params keyword was removed from the IClientProxy.InvokeAsync() method and replaced by a set of extension methods

A word of thanks to everyone who has tried the new SignalR and provided feedback. Please keep it up! You can provide feedback or let us know about bugs/issues here.

For examples on using this, and future, versions you can look at the SignalR Samples repository on GitHub.

User accounts made easy with Azure


One of the most common requirements for a web application is to have users create accounts, for the purpose of access control and personalization. While ASP.NET templates have always made it easy to create an application that uses a database you control to register and track user accounts, that introduces other complications over the long term. As laws around user information get stricter and security becomes more important, maintaining a database of users and passwords comes with an increasing set of maintenance and regulatory challenges.

A few weeks ago I tried out the new Azure Active Directory B2C service, and was really impressed with how easy it was to use. It added user identity and access control to my app, while moving all the responsibility for signing users up, authenticating them, and maintaining the account database to Azure (and it’s free to develop with).

In this post I’ll briefly walk through how to get up and running with Azure B2C in a new ASP.NET Core app. It’s worth noting that it works just as well with ASP.NET apps on the .NET Framework, with slightly different steps (see walkthrough). I’ll then include some resources that will help you with more complex scenarios, including authenticating against a backend Web API.

Step 1: Create the B2C Tenant in Azure

  • To get started, you’ll need an Azure account. If you don’t have one yet, create your free account now
  • Create an Azure AD B2C Directory
  • Create your policies (this is where you indicate what you need to know about the user)
    • Create a sign-up or sign-in policy
      • Choose all of the information you want to know about the user under “Sign-up attributes”
      • Indicate all the information you want passed to your application under “Application Claims” (note: the default template uses the “Display Name” attribute in the navigation bar so you will want to include that)
    • Create a profile editing policy
    • Create a password reset policy
    • Note: After you create each policy, you’ll be taken back to the tab for that policy type which will show you the full name of the policy you just created, which will be in the form “B2C_1_<name_you_entered>”.  You’ll need these names below when creating your project.
  • Register your application (follow the instructions for a Web App)
    • Note: You’ll get the “Reply URL” in the next step when you create the new project.

Step 2: Create the Project in Visual Studio

  • File -> New Project -> Visual C# -> ASP.NET Core Web Application
  • On the New ASP.NET dialog, click the “Change Authentication” button on the right side of the dialog
    • Choose “Individual User Accounts”
    • Change the dropdown in the top right to “Connect to an existing user store in the cloud”
    • Fill in the required information from the B2C Tenant you created in the Azure portal previously
    • Copy the “Reply URI” from the “Change Authentication” dialog and enter it into the application properties for the app you previously created in your B2C tenant in the Azure portal.
    • Click OK

Step 3: Try it out

Now run your application (ctrl+F5), and click “Sign in” in the top right:


You’ll be navigated to Azure’s B2C sign-in/sign-up page:


The first time, click “Sign up now” at the bottom to create your account. Once your account is created, you’ll be redirected back to your app, now signed in. It’s as easy as that.


Additional Resources

The above walk through provided a quick overview for how to get started with Azure B2C and ASP.NET Core. If you are interested in exploring further or using Azure B2C in a different context, here are a few resources that you may find useful:

  • Create an ASP.NET (.NET Framework) app with B2C
  • ASP.NET Core GitHub sample: This sample demonstrates how to use a web front end to authenticate, and then obtain a token to authenticate against a backend Web API.
  • If you are looking to add support to an existing app, you may find it easiest to create a new project in Visual Studio and copy and paste the relevant code into your existing application. You can of course use code from the GitHub samples mentioned above as well

Conclusion

Hopefully you found this short overview of Azure B2C interesting. Authentication is often much more complex than the simple scenario we covered here, and there is no single “one size fits all”, so it should be pointed out that there are many alternative options, including third-party and open source options. As always, feel free to let me know what you think in the comments section below, or via twitter.


Sharing Configuration in ASP.NET Core SPA Scenarios


This is a guest post from Mike Rousos

ASP.NET Core 2.0 recently released and, with it, came some new templates, including new project templates for single-page applications (SPA) served from an ASP.NET Core backend. These templates make it easy to set up a web application with a rich JavaScript frontend and a powerful ASP.NET Core backend. Even better, the templates enable server-side prerendering, so the JavaScript front-end is already rendered and ready to display when users first arrive at your web app.

One challenge of the SPA scenario, though, is that there are two separate projects to manage, each with their own dependencies, configuration, etc. This post takes a look at how ASP.NET Core’s configuration system can be used to store configuration settings for both the backend ASP.NET Core app and a front-end JavaScript application together.

Getting Started

To get started, you’ll want to create a new ASP.NET Core Angular project – either by creating a new ASP.NET Core project in Visual Studio and selecting the Angular template, or using the .NET CLI command dotnet new angular.

New ASP.NET Core Angular Project

At this point, you should be able to restore client packages (npm install) and launch the application.

In this project template, the ASP.NET Core app’s configuration is loaded from default sources thanks to the WebHost.CreateDefaultBuilder call in Program.cs. The default configuration providers include:

  • appsettings.json
  • appsettings.{Environment}.json
  • User secrets (if in a development environment)
  • Environment variables
  • Command line arguments

You can see that appsettings.json already has some initial config values related to logging.

For the client-side application, there aren’t any configuration values setup initially. If we were using the Angular CLI to create and manage this application, it would provide environment-specific TypeScript files (environment.ts, environment.prod.ts, etc.) to provide settings specific to different environments. The Angular CLI would pick the right config file to use when building or serving the application, based on the environment specified. In our case, though, we’re not using the Angular CLI to build the client (we’re just using WebPack directly).

Instead of using client-side TypeScript files for configuration, it would be convenient to share portions of our server app’s configuration with the client app. That would enable us to use ASP.NET Core’s rich configuration system which can load from environment-specific config files, as well as from many other sources (environment variables, Azure Key Vault, etc.). We just need to make those config settings available to our client app.

Embedding Client Configuration

Since our goal is to store client and server settings together in the ASP.NET Core app, it’s helpful to define the shape of the client config settings by creating a class modeling the configuration data. This isn’t required (you could just send settings as raw json), but if the structure of your configuration isn’t frequently changing, it’s a little nicer to work with strongly typed objects in C# and TypeScript.

Here’s a simple class for storing sample client configuration data:
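A minimal sketch of such a class; the single UserMessage property matches the setting used in the configuration examples later in the post:

```csharp
// Strongly typed model for the settings shared with the client app.
public class ClientConfiguration
{
    public string UserMessage { get; set; }
}
```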

Next, we can use configuration Options to easily extract a ClientConfiguration object from the server application’s larger configuration.

Here are the calls to add to Startup.ConfigureServices to make a ClientConfiguration options object available in the web app’s dependency injection container:
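A sketch of the registration, using the standard options APIs (the section name matches the “ClientConfiguration” section described next):

```csharp
// In Startup.ConfigureServices: bind the "ClientConfiguration" section
// of the app's configuration to the strongly typed options object.
services.Configure<ClientConfiguration>(
    Configuration.GetSection("ClientConfiguration"));
```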

Notice that we’ve specified that the ClientConfiguration object comes from the “ClientConfiguration” section of the app’s configuration, so that’s where we need to add config values in appsettings.json (or via environment variables, etc.):
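A sketch of what that section might look like in appsettings.json (the UserMessage value is illustrative):

```json
{
  "ClientConfiguration": {
    "UserMessage": "Hello from ASP.NET Core configuration"
  }
}
```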

If you want to set these sorts of hierarchical settings using environment variables, the variable name should include all levels of the setting’s hierarchy delimited by colons or double underscores. So, for example, the ClientConfiguration section’s UserMessage setting could be set from an environment variable by setting ClientConfiguration__UserMessage (or ClientConfiguration:UserMessage) equal to some value.
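For example, in a bash-style shell (only the double-underscore form works here, since colons are not legal in environment variable names in most shells):

```shell
# Maps to the ClientConfiguration:UserMessage setting; the value is illustrative.
export ClientConfiguration__UserMessage="Hello from config"
echo "$ClientConfiguration__UserMessage"
```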

Creating a Client Configuration Endpoint

There are a number of ways that we can make configuration settings from our server application available to the client. One easy option is to create a web API that returns configuration settings.

To do that, let’s create a ClientConfiguration controller (which receives the ClientConfiguration options object as a constructor parameter):

Next, give the controller a single index action which, as you may have guessed, just returns the client configuration object:
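```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

// Sketch of the controller: the [controller] route token makes it
// reachable at /ClientConfiguration, as described below.
[Route("[controller]")]
public class ClientConfigurationController : Controller
{
    private readonly ClientConfiguration _configuration;

    // The options object is injected thanks to the registration
    // made in Startup.ConfigureServices.
    public ClientConfigurationController(IOptions<ClientConfiguration> options)
    {
        _configuration = options.Value;
    }

    // GET /ClientConfiguration returns the settings serialized as JSON.
    [HttpGet]
    public ClientConfiguration Index() => _configuration;
}
```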

At this point, you can launch the application and confirm that navigating to /ClientConfiguration returns configuration settings extracted from those configured for the web app. Now we just have to set up the client app to use those settings.

Creating a Client-Side Model and Configuration Service

Since our client configuration is strongly typed, we can start implementing our client-side config retrieval by making a configuration model that matches the one we made on the server. Create a configuration.ts file like this:
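```typescript
// configuration.ts: client-side model mirroring the server's
// ClientConfiguration class. Property names are camelCased to match
// ASP.NET Core's default JSON serialization.
export class Configuration {
    userMessage: string = '';
    messageCount: number = 0;
}
```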

Next, we’ll want to handle app config settings in a service. The service will use Angular’s built-in Http service to request the configuration object from our web API. Both the Http service and our application’s ‘BASE_URL’ (the web app’s root address, which we’ll call back to reach the web API) can be injected into the configuration service’s constructor.

Then, we just create a loadConfiguration function to make the necessary GET request, deserialize into a Configuration object, and store the object in a local field. We convert the http request into a Promise (instead of leaving it as an Observable) so that it works with Angular’s APP_INITIALIZER function (more on this later!).

The finished configuration service should look something like this:
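```typescript
// Sketch assuming the @angular/http Http service used by the
// template of this era; the file path of the model import is illustrative.
import { Injectable, Inject } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';
import { Configuration } from './configuration';

@Injectable()
export class ConfigurationService {
    configuration: Configuration;

    constructor(private http: Http, @Inject('BASE_URL') private baseUrl: string) { }

    // Requests configuration from the web API and caches it locally.
    // Converted to a Promise so it can be used with APP_INITIALIZER.
    loadConfiguration(): Promise<Configuration> {
        return this.http.get(this.baseUrl + 'ClientConfiguration')
            .toPromise()
            .then(response => {
                this.configuration = response.json() as Configuration;
                return this.configuration;
            });
    }
}
```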

Now that we have a configuration service, we need to register it in app.module.shared.ts to make it available to other components. The ASP.NET Core Angular template puts most module setup for our client app in app.module.shared.ts (instead of app.module.ts) since there are separate modules for server-side rendering and client-side rendering. App.module.shared.ts contains the module pieces common to both scenarios.

To register our service, we need to import it and then add it to a providers array passed to the @NgModule decorator:
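```typescript
// app.module.shared.ts excerpt (module class name and import path are
// illustrative; the other declarations/imports entries are unchanged).
import { NgModule } from '@angular/core';
import { ConfigurationService } from './services/configuration.service';

@NgModule({
    declarations: [ /* AppComponent, HomeComponent, ... */ ],
    imports: [ /* CommonModule, HttpModule, RouterModule, ... */ ],
    providers: [ ConfigurationService ]
})
export class AppModuleShared { }
```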

There’s one other important change to make before we leave app.module.shared.ts. We need to make sure that config values are loaded from the server before any components are rendered. To do that, we add ConfigurationService.loadConfiguration to our app’s APP_INITIALIZER function (which is called at app-initialization time and waits for returned promises to finish prior to any components being rendered).

Import APP_INITIALIZER from @angular/core and then update your providers argument to include a registration for APP_INITIALIZER:
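```typescript
// app.module.shared.ts excerpt: the providers array, extracted here into
// an exported constant for readability (the import path is illustrative).
import { APP_INITIALIZER } from '@angular/core';
import { ConfigurationService } from './services/configuration.service';

export const sharedProviders = [
    ConfigurationService,
    {
        provide: APP_INITIALIZER,
        // A factory returning a function that returns a Promise,
        // hence the double fat-arrow.
        useFactory: (configService: ConfigurationService) =>
            () => configService.loadConfiguration(),
        deps: [ConfigurationService],
        multi: true
    }
];
```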

Note that useFactory is a function that must return a function (which, in turn, returns a promise), so we have the double fat-arrow syntax seen above. Also, don’t forget to specify multi: true since there may be multiple APP_INITIALIZER functions registered.

Now the configuration service is registered with DI and will automatically load configuration from the server when the app starts up.

To make use of it, let’s update the app’s home component. Import ConfigurationService into the home component and update the component’s constructor to take an instance of the service as a parameter. Make sure to make the parameter public so that it can be used from the home component’s HTML template. Since we will want to loop over the ‘messageCount’ config setting, it’s also useful to create a small helper function to return an array with a length of messageCount for use with *ngFor in the HTML template later.

Here’s my simple home component:
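```typescript
// Sketch of the home component; the import path and the helper name
// (messages) are illustrative.
import { Component } from '@angular/core';
import { ConfigurationService } from '../../services/configuration.service';

@Component({
    selector: 'home',
    templateUrl: './home.component.html'
})
export class HomeComponent {
    // public so the HTML template can bind to configurationService
    constructor(public configurationService: ConfigurationService) { }

    // Returns an array of length messageCount for use with *ngFor.
    messages(): number[] {
        return Array(this.configurationService.configuration.messageCount).fill(0);
    }
}
```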

Finally, get rid of everything currently in home.component.html and replace it with an HTML template that takes advantage of the configuration values:
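```html
<!-- home.component.html sketch; assumes a helper like messages() on the
     component that returns an array of length messageCount -->
<h1>Configuration Demo</h1>
<div *ngFor="let m of messages()">
    <p>{{ configurationService.configuration.userMessage }}</p>
</div>
```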

Trying it Out

You should now be able to run the web app and see the server-side configuration values reflected in the client application!

Here’s a screenshot of my sample app running with ASPNETCORE_ENVIRONMENT set to Development (I set MessageCount to 2 in appsettings.Development.json):

Development Environment Results

And here’s one with ASPNETCORE_ENVIRONMENT set to Production (where MessageCount is three and the message is appropriately updated):

Production Environment Results

Wrap Up

By exposing (portions of) app configuration from our ASP.NET Core app and making use of Angular’s APP_INITIALIZER function, we can share configuration values between server and client apps. This allows our client apps to take advantage of ASP.NET Core’s rich configuration system. In this sample, the client configuration settings were only used by our Angular app, but if your scenario includes some settings that are needed by both the client and server applications, this sort of solution allows you to set the config values in one place and have them be available to both applications.

Future improvements of this sample could include adding a time-to-live on the client’s cached configuration object to allow automatically reloaded config values to propagate to the client, or perhaps using different configuration providers to show Angular app configuration coming from Azure Key Vault or some other less common source.


Recent updates for publishing


We have recently added a few interesting features for ASP.NET publishing. The updates include:

  • Container Registry Publish Updates
  • Create publish profile without publishing

In this post, we will briefly describe these updates. We will get started with the container related news.

Container Registry Publish Updates

Container development (e.g. Docker) has grown in popularity recently, including in .NET development. We’ve started adding support for containerized applications in Visual Studio as well. When developing a containerized app, there are two components that are needed to run your application.

  • App image
  • Host to run the container

The app image includes the application itself and info about configuring the machine hosting the application.

The host machine loads the app image and runs it. There are a variety of options for the host machine. In previous releases we supported publishing a containerized ASP.NET Core project to Azure Container Registry (ACR) and creating a new Azure App Service to host the application. If you were running your application on a different host, Visual Studio couldn’t help. Now Visual Studio has the following container publish related features:

  • A: Publish an ASP.NET Core containerized app to ACR and a new Azure App Service (Visual Studio 2017 15.0)
  • B: Publish an ASP.NET (Core or full .NET Framework) containerized project to a container registry (including, but not limited to, ACR) (Visual Studio 2017 15.5 Preview 2)

 

Feature A enabled Azure App Service users to run a containerized ASP.NET Core app on a new Azure App Service host. This feature was included in the initial release of Visual Studio 2017. We are including it here for completeness. To publish one of these apps to App Service you’ll use the Microsoft Azure App Service Linux option in the publish page. See the next image.

After selecting this option you’ll be prompted to configure the new App Service instance and the container registry settings.

For feature B, we have added a new Container Registry publish option on the Publish page. You can see an image of that below.

The radio buttons below the Container Registry button list the different options. Let’s take a closer look at each of them below.

 

  • Create New Azure Container Registry: use this when you want to publish your app image to a new Azure Container Registry. You can publish several different app images to the same Container Registry.
  • Select Existing Azure Container Registry: use this when you’ve already created the Azure Container Registry and want to publish a new app image to it.
  • Docker Hub: use this if you want to publish to Docker Hub (hub.docker.com).
  • Custom: use this to explicitly set the publish options yourself.

 

After selecting the appropriate option and clicking the Publish button, you’ll be prompted to complete the configuration and continue to publish. The Container Registry publish feature is enabled for both ASP.NET Core and ASP.NET full .NET Framework projects.

To try out the Azure related features you’ll need an Azure subscription. If you don’t already have one you can get started for free.

We’ve only briefly covered the Container Registry features here. We will be blogging more soon about how to use them in end-to-end scenarios. Until then you can take a look at the docs. Now let’s move on to the next update.

Create Publish Profile Without Publishing

In Visual Studio publishing to a new destination includes two steps:

  • Create Publish Profile
  • Start publishing

In Visual Studio 2017 15.5 Preview 2 we have added a new gear option next to the Publish button. In previous releases of Visual Studio 2017, when you created a publish profile the publish process started automatically immediately afterwards, which prevented you from changing publish settings before the initial publish. We’ve heard feedback from users that in some cases the publish options need to be customized before the initial publish. Some reasons you may choose to delay the publish process include: you need to configure databases, you need to change the Build Configuration used, or you want to validate publish credentials before publishing. In the image below you can see the new gear option highlighted.

To create a publish profile without publishing, select the publish destination (by clicking on one of the big buttons) and then click the gear; you’ll get a context menu with two options. Select Create Profile.

 

 

 

After you select Create Profile here, you’ll continue to create the profile, and any new Azure resources if applicable. You can then publish your app at a later time with the Publish button. The following image shows this button.

Now that we’ve covered the delayed publish feature, let’s wrap up.

Conclusion

These were some updates that we wanted to share with you. We’ll be blogging more soon about how to use the container features in full scenarios. If you have any questions, please comment below or email me at sayedha AT microsoft.com or on Twitter @SayedIHashimi. You can also use the built in send feedback feature in Visual Studio 2017.

Thanks,
Sayed Ibrahim Hashimi

Publishing a Web App to an Azure VM from Visual Studio


We know virtual machines (VMs) are one of the most popular places to run apps in Azure, but publishing to a VM from Visual Studio has been a tricky experience for some. So, we’re pleased to announce that in Visual Studio 15.5 (get the preview now) we’ve added some improvements to the experience. In this post, we’ll discuss the requirements for a VM that’s ready to run an ASP.NET web application, and then walk through how to publish to it from Visual Studio 15.5 Preview 2. Also, if you have a minute to tell us about how you work with VMs, we’d appreciate it.

Contents

    – Prepare your VM for publishing
    – Walk-through: Publishing from Visual Studio
    – Modifying publish settings [Optional]

Prepare your VM for publishing

Before you can publish a web application to an Azure Virtual Machine from Visual Studio, the VM must be properly configured. The minimum requirements are listed below.

    Server Components:
        • IIS
        • ASP.NET 4.6
        • Web Management Service
        • Web Deploy
    Open firewall ports:
        • Port 80 (http)
        • Port 8172 (Web Deploy)
    DNS:
        • A DNS name assigned to the VM
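If you’re setting up the VM yourself, most of these prerequisites can be scripted. Here’s a sketch using PowerShell on the VM (Web Deploy itself is installed separately, for example via its MSI, and on Azure the same ports must also be opened in the VM’s network security group; the firewall rule names below are illustrative):

```powershell
# Install IIS, ASP.NET 4.x integration, and the Web Management Service
Install-WindowsFeature Web-Server, Web-Asp-Net45, Web-Mgmt-Service

# Allow remote connections to the Web Management Service, then restart it
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server' `
    -Name EnableRemoteManagement -Value 1
Restart-Service WMSvc

# Open the HTTP and Web Deploy ports
New-NetFirewallRule -DisplayName 'HTTP' -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow
New-NetFirewallRule -DisplayName 'Web Deploy' -Direction Inbound -Protocol TCP -LocalPort 8172 -Action Allow
```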

Walk-through: Publishing a web app to an Azure Virtual Machine from Visual Studio 2017

  1. Open your web application in Visual Studio 2017 (v15.5 Preview 2)
  2. Right-click the project and choose “Publish…”
  3. Press the arrow on the right side of the page to scroll through the publishing options until you see “Microsoft Azure Virtual Machine”.
  4. Select the “Microsoft Azure Virtual Machine” icon, then click “Browse…” to open the Azure Virtual Machine selector.
    The Azure VM selector dialog will open.
  5. Choose the appropriate account (with Azure subscription connected to your virtual machine).
    • If you’re signed in to Visual Studio, the account list will be pre-populated with all your authenticated accounts.
    • If you are not signed in, or if the account you need is not listed, choose “Add an account…” and follow the prompts to log in.

    Wait for the list of Existing Virtual Machines to populate. (Note: This can take some time).

  6. From the Existing Virtual Machines list, select the VM that you intend to publish your web application to, then press “OK”.

    Focus returns to the Publish page with the Azure Virtual Machine populated and the “Publish” button enabled.

  7. Press the “Publish” button to create the publish profile and begin publishing to your Azure VM.
    Note: You can delay publishing so you can configure additional settings prior to your first publish as covered later in the post.
  8. When prompted for User name and Password, enter the credentials of a user who is authorized for publishing web applications on the VM, then press “OK”.
    Note: For new VMs, this is usually the administrator account. To enable non-administrator user accounts with permission to publish via WebDeploy, go into the authentication settings in IIS -> Management Service on the VM.
  9. If prompted, accept the security certificate.
  10. Publishing proceeds.
    You can watch the progress in the Output window.
    When publishing completes, a web browser will launch and open at the destination URL of the web site hosted on the Azure VM.
    Note: If you don’t want the web browser launching after each publish, remove the “Destination URL” from the Publish Profile settings.

Success!

At this point, you have finished publishing your web application to the VM.
The Publish page refreshes with the new profile selected and the details shown in the Summary section.

You can return to this screen any time to publish again, rename or delete the profile, launch the web site in a browser, or modify the publish settings.
Read on to learn about some interesting settings.

Modify Publish Settings [Optional]

After the Publish Profile has been created, you can edit the settings to tweak your publishing experience.
To modify the settings of the publish profile, click the “Settings…” link on the Publish page.

This will open the Publish Profile Settings dialog.

Save user credentials to the profile

To avoid having to provide user name and password each time you publish, you can store the user credentials in the publish profile.

  1. In the “User name” and “Password” fields, enter the credentials of an authorized user on the target VM.
  2. Press “Validate Connection” to confirm that the details are correct.
  3. Choose “Save password” if you don’t want to be prompted to enter the password each time you publish.
  4. Click “Next” to progress to the “Settings” tab, or click “Save” to accept the changes and close the dialog.

Ensure a clean publish each time

To ensure that your web application is uploaded to a clean web site each time you publish, you can configure the publish profile to delete all files on the target web server before publishing.

  1. Go into the “Settings” page of the Publish dialog.
  2. Expand the File Publish Options.
  3. Choose “Remove additional files at destination”.
    Warning! Deleting files on the target VM may have undesired effects, including removing files that were uploaded by other team members, or files generated by the application. Please be sure you know the state of the machine before publishing with this option enabled.

Conclusion

We’d love for you to download the 15.5 Preview and let us know what you think of the new experience. Also, if you could take two minutes to tell us about how you use VMs in the cloud, we’d appreciate it. As always please let us know what you think in the comments section below, by using the send feedback tool in Visual Studio, or via Twitter.

Creating a Minimal ASP.NET Core Windows Container


This is a guest post by Mike Rousos

One of the benefits of containers is their small size, which allows them to be more quickly deployed and more efficiently packed onto a host than virtual machines could be. This post highlights some recent advances in Windows container technology and .NET Core technology that allow ASP.NET Core Windows Docker images to be reduced in size dramatically.

Before we dive in, it’s worth reflecting on whether Docker image size even matters. Remember, Docker images are built by a series of read-only layers. When using multiple images on a single host, common layers will be shared, so multiple images/containers using a base image (a particular Nano Server or Server Core image, for example) will only need that base image present once on the machine. Even when containers are instantiated, they use the shared image layers and only take up additional disk space with their writable top layer. These efficiencies in Docker mean that image size doesn’t matter as much as someone just learning about containerization might guess.

All that said, Docker image size does make some difference. Every time a VM is added to your Docker host cluster in a scale-out operation, the images need to be populated. Smaller images will get the new host node up and serving requests faster. Also, despite image layer sharing, it’s not unusual for Docker hosts to have dozens of different images (or more). Even if some of those share common layers, there will be differences between them and the disk space needed can begin to add up.

If you’re new to using Docker with ASP.NET Core and want to read-up on the basics, you can learn all about containerizing ASP.NET Core applications in the documentation.

A Starting Point

You can create an ASP.NET Core Docker image for Windows containers by checking the ‘Enable Docker Support’ box while creating a new ASP.NET Core project in Visual Studio 2017 (or by right-clicking an existing .NET Core project and choosing ‘Add -> Docker Support’).

Adding Docker Support

To build the app’s Docker image from Visual Studio, follow these steps:

  1. Make sure the docker-compose project is selected as the solution’s startup project.
  2. Change the project’s Configuration to ‘Release’ instead of ‘Debug’.
    1. It’s important to use Release configuration because, in Debug configuration, Visual Studio doesn’t copy your application’s binaries into the Docker image. Instead, it creates a volume mount that allows the application binaries to be used from outside the container (so that they can be easily updated without rebuilding the image). This is great for a code-debug-fix cycle, but will give us incorrect data for what the Docker image size will be in production.
  3. Push F5 to build (and start) the Docker image.

Visual Studio Docker Launch Settings

Alternatively, the same image can be created from a command prompt by publishing the application (dotnet publish -c Release) and building the Docker image (docker build -t samplewebapi --build-arg source=bin/Release/netcoreapp2.0/publish .).

The resulting Docker image has a size of 1.24 GB (you can see this with the docker images or docker history commands). That’s a lot smaller than a Windows VM and even considerably smaller than Windows Server Core containers or VMs, but it’s still large for a Docker image. Let’s see how we can make it smaller.

Initial Template Image

Windows Nano Server, Version 1709

The first (and by far the greatest) improvement we can make to this image size has to do with the base OS image we’re using. If you look at the docker images output above, you will see that although the total image is 1.24 GB, the majority of that (more than 1 GB) comes from the underlying Windows Nano Server image.

The Windows team recently released Windows Server, version 1709. One of the great features in 1709 is an optimized Nano Server base image that is nearly 80% smaller than previous Nano Server images. The Nano Server, version 1709 image is only about 235 MB on disk (~93 MB compressed).

The first thing we should do to optimize our ASP.NET Core application’s image is to use that new Nano Server base. You can do that by navigating to the app’s Dockerfile and replacing FROM microsoft/aspnetcore:2.0 with FROM microsoft/aspnetcore:2.0-nanoserver-1709.
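The modified Dockerfile might look something like this (everything except the FROM line matches the template-generated file for a project named SampleWebApi):

```dockerfile
# Only the base image changes; the rest is the template-generated Dockerfile.
FROM microsoft/aspnetcore:2.0-nanoserver-1709
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "SampleWebApi.dll"]
```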

Be aware that in order to use Nano Server, version 1709 Docker images, the Docker host must be running either Windows Server, version 1709 or Windows 10 with the Fall Creators Update, which is rolling out worldwide right now. If your computer hasn’t received the Fall Creators Update yet, don’t worry. It is possible to create Windows Server, version 1709 virtual machines in Azure to try out these new features.

After switching to use the Nano Server, version 1709 base image, we can re-build our Docker image and see that its size is now 357 MB. That’s a big improvement over the original image!

If you’re building your Docker image by launching the docker-compose project from within Visual Studio, make sure Visual Studio is up-to-date (15.4 or later) since recent updates are needed to launch Docker containers based on Nano Server, version 1709 from Visual Studio.

AspNet Core v1709 Docker Image

That Might be Small Enough

Before we make the image any smaller, I want to pause to point out that for most scenarios, using the Nano Server, version 1709 base image is enough of an optimization and further “improvements” might actually make things worse. To understand why, take a look at the sizes of the component layers of the Docker image created in the last step.

AspNet Core v1709 Layers

The largest layers are still the OS (the bottom two layers) and, at the moment, that’s as small as Windows images get. Our app, on the other hand, is the 373 kB layer toward the top of the layer history. That’s already quite small.

The only good places left to optimize are the .NET Core layer (64.9 MB) or the ASP.NET Core layer (53.6 MB). We can (and will) optimize those, but in many cases it’s counter-productive to do so because Docker shares layers between images (and containers) with common bases. In other words, the ASP.NET Core and .NET Core layers shown in this image will be shared with all other containers on the host that use microsoft/aspnetcore:2.0-nanoserver-1709 as their base image. The only additional space that other images consume will be the ~500 kB that our application added on top of the layers common to all ASP.NET Core images. Once we start making changes to those base layers, they won’t be sharable anymore (since we’ll be pulling out things that our app doesn’t need but that others might). So, we might reduce this application’s image size but cause others on the host to increase!

So, bottom line: if your Docker host will be hosting containers based on several different ASP.NET Core application images, then it’s probably best to just have them all derive from microsoft/aspnetcore:2.0-nanoserver-1709 and let the magic of Docker layer sharing save you space. If, on the other hand, your ASP.NET Core image is likely to be used alongside other non-.NET Core images which are unlikely to be able to share much with it anyhow, then read on to see how we can further optimize our image size.

Reducing ASP.NET Core Dependencies

The majority of the ~54 MB contributed by the ASP.NET Core layer of our image is a centralized store of ASP.NET Core components that’s installed by the aspnetcore Dockerfile. As mentioned above, this is useful because it allows ASP.NET Core dependencies to be shared between different ASP.NET Core application Docker images. If you have a small ASP.NET Core app (and don’t need the sharing), it’s possible to just bundle the parts of the ASP.NET Core web stack you need with your application and skip the rest.

To remove unused portions of the ASP.NET Core stack, we can take the following steps:

  1. Update the Dockerfile to use microsoft/dotnet:2.0.0-runtime-nanoserver-1709 as its base image instead of microsoft/aspnetcore:2.0-nanoserver-1709.
  2. Add the line ENV ASPNETCORE_URLS http://+:80 to the Dockerfile after the FROM statement (this was previously done in the aspnetcore base image for us).
  3. In the app’s project file, replace the Microsoft.AspNetCore.All metapackage dependency with dependencies on just the ASP.NET Core components the app requires. In this case, my app is a trivial ‘Hello World’ web API, so I only need the following (larger apps would, of course, need more ASP.NET Core packages):
    1. Microsoft.AspNetCore
    2. Microsoft.AspNetCore.Mvc.Core
    3. Microsoft.AspNetCore.Mvc.Formatters.Json
  4. Update the app’s Startup.cs file to call services.AddMvcCore().AddJsonFormatters() instead of services.AddMvc() (since the AddMvc extension method isn’t in the packages we’ve referenced).
    1. This works because our sample project is a Web API project. An MVC project would need more MVC services.
  5. Update the app’s controllers to derive from ControllerBase instead of Controller.
    1. Again, since this is a Web API controller instead of an MVC controller, it doesn’t use the additional functionality Controller adds.
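On the project file side, the trimmed package references from step 3 might look like this (versions are illustrative for the 2.0 timeframe):

```xml
<!-- Replace the Microsoft.AspNetCore.All metapackage with only the
     packages this small Web API needs. -->
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore" Version="2.0.0" />
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="2.0.0" />
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Json" Version="2.0.0" />
</ItemGroup>
```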

Now when we build the Docker image, we can see we’ve shaved a little more than 40 MB by only including the ASP.NET Core dependencies we need. The total size is now 315 MB.

NanoServer No AspNet All

Bear in mind that this is a trivial sample app and a real-world application would not be able to cut as much of the ASP.NET Core framework.

Also, notice that while we eliminated the 54 MB intermediate ASP.NET Core layer (which could have been shared), we’ve increased the size of our application layer (which cannot be shared) by about 11 MB.

Trimming Unused Assemblies

The next place to consider saving space will be from the .NET Core/CoreFX layer (which is consuming ~65 MB at the moment). Like the ASP.NET Core optimizations, this is only useful if that layer wasn’t going to be shared with other images. It’s also a little trickier to improve because, unlike ASP.NET Core, .NET Core’s framework is delivered as a single package (Microsoft.NetCore.App).

To reduce the size of .NET Core/CoreFX files in our image, we need to take two steps:

  1. Include the .NET Core files as part of our application (instead of in a base layer).
  2. Use a preview tool to trim unused assemblies from our application.

The result of those steps will be the removal of any .NET Framework (or remaining ASP.NET Core framework) assemblies that aren’t actually used by our application.

First, we need to make our application self-contained. To do that, add a <RuntimeIdentifiers>win10-x64</RuntimeIdentifiers> property to the project’s csproj file.

We also need to update our Dockerfile to use microsoft/nanoserver:1709 as its base image (so that we don’t end up with two copies of .NET Core) and use SampleWebApi.exe as our image’s entrypoint instead of dotnet SampleWebApi.dll.

Up until now, it was possible to build the Docker image either from Visual Studio or the command line. But Visual Studio doesn’t currently support building Docker images for self-contained .NET Core applications (which are not typically used for development-time debugging). So, to build our Docker image from here on out, we will use the following command line interface commands (notice that they’re a little different from those shown previously since we are now publishing a runtime-specific build of the application). Also, you may need to delete (or update) the .dockerignore file generated as part of the project’s template because we’re now copying binaries into the Docker image from a different publish location.

dotnet publish -c Release -r win10-x64
docker build -t samplewebapi --build-arg
   source=bin/Release/netcoreapp2.0/win10-x64/publish .

These changes will cause the .NET Core assemblies to be deployed with our application instead of in a shared location, but the included files will be about the same. To remove unneeded assemblies, we can use Microsoft.Packaging.Tools.Trimming, a preview package that removes unused assemblies from a project’s output and publish folders. To do that, add a package reference to Microsoft.Packaging.Tools.Trimming and a <TrimUnusedDependencies>true</TrimUnusedDependencies> property to the application’s project file.
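Putting those pieces together, the relevant project file additions might look like this (the package version shown is illustrative):

```xml
<!-- csproj excerpt: self-contained runtime identifier plus preview
     dependency trimming. -->
<PropertyGroup>
  <RuntimeIdentifiers>win10-x64</RuntimeIdentifiers>
  <TrimUnusedDependencies>true</TrimUnusedDependencies>
</PropertyGroup>

<ItemGroup>
  <PackageReference Include="Microsoft.Packaging.Tools.Trimming" Version="1.1.0-preview1-25818-01" />
</ItemGroup>
```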

After making those additions, re-publishing, and re-building the Docker image (using the CLI commands shown above), the total image size is down to 288 MB.

NanoServer SelfContained Trim Dependencies

As before, this reduction in total image size does come at the expense of a larger top layer (up to 53 MB).

One More Round of Trimming

We’re nearly done now, but there’s one more optimization we can make. Microsoft.Packaging.Tools.Trimming removed some unused assemblies, but others still remain because it isn’t clear whether the dependencies on those assemblies are actually exercised or not. And that’s not to mention all the IL in an assembly that may be unused if our application calls just one or two methods from it.

There’s another preview trimming tool, the .NET IL Linker, which is based on the Mono linker and can remove unused IL from assemblies.

This tool is still experimental, so to reference it we need to add a NuGet.config to our solution and include https://dotnet.myget.org/F/dotnet-core/api/v3/index.json as a package source. Then, we add a dependency on the latest preview version of ILLink.Tasks (currently, the latest version is 0.1.4-preview-981901).
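A NuGet.config along these lines works (the package source key name is arbitrary):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
  </packageSources>
</configuration>
```

With the source in place, the project file gets a reference like <PackageReference Include="ILLink.Tasks" Version="0.1.4-preview-981901" />.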

ILLink.Tasks will trim IL automatically, but we can get a report on what it has done by passing /p:ShowLinkerSizeComparison=true to our dotnet publish command.

After one more publish and Docker image build, the final size for our Windows ASP.NET Core Web API container image comes to 271 MB!

NanoServer Double Trim

Even though trimming ASP.NET Core and .NET Core Framework assemblies isn’t common for most containerized projects, the preview trimming tools shown here can be very useful for reducing the size of large applications since they can remove application-local assemblies (pulled in from NuGet, for example) and IL code paths that aren’t used.

Wrap-Up

This post has shown a series of optimizations that can help to reduce ASP.NET Core Docker image size. In most cases, all that’s needed is to be sure to use new Nano Server, version 1709 base images and, if your app is large, to consider some preview dependency trimming options like Microsoft.Packaging.Tools.Trimming or the .NET IL Linker.

Less commonly, you might also consider using app-local versions of the ASP.NET Core or .NET Core Frameworks (as opposed to shared ones) so that you can trim unused dependencies from them. Just be careful to keep common base image layers unchanged if they’re likely to be shared between multiple images. Although this article presented the different trimming and minimizing options as a series of steps, you should feel free to pick and choose the techniques that make sense for your particular scenario.

In the end, a simple ASP.NET Core web API sample can be packaged into a < 360 MB Windows Docker image without sacrificing any ability to share ASP.NET Core and .NET Core base layers with other Docker images on the host and, potentially, into an even smaller image (271 MB) if that sharing is not required.

Improvements to Azure Functions in Visual Studio


We’re excited to announce several improvements to the Azure Functions experience in Visual Studio as part of the latest update to the Azure Functions tools on top of Visual Studio 2017 v15.5. (Get the preview now.)

New Function project dialog

To make it easier to get up and running with Azure Functions, we’ve introduced a new Functions project dialog. Now, when creating a Functions project, you can choose one that starts with one of the most popular trigger types (Http, Queue or Timer). If you’re looking for something different, choose the Empty project and add the item after project creation.

Additionally, most Function apps require a valid storage account to be specified in AzureWebJobsStorage. Typically this has meant adding a connection string to the local.settings.json after the function is created. To make it easier to find and configure the connection strings for your Function’s storage account, we’ve introduced a Storage Account picker in the new project dialog.

Storage account picker in new Functions project dialog

The default option is the Storage Emulator. The Storage Emulator is a local service, installed as part of the Azure workload, that offers much of the functionality of a real Azure storage account. If it’s not already running, you can start it by pressing the Windows Start key and typing “Microsoft Azure Storage Emulator”. This is a great option if you’re looking to get up and running quickly – especially if you’re playing around, as it doesn’t require any resources to be provisioned in Azure.

However, the best way to guarantee that all supported features are available to your Functions project is to configure it to use an Azure storage account. To help with this, we’ve added a Browse… option in the Storage Account picker that launches the Azure Storage Account selection dialog. This lets you choose from existing storage accounts that you have access to through your Azure subscriptions.

When the project is created, the connection string for the selected storage account will be added to the local.settings.json file and you’ll be able to run your Functions project straight away!
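For reference, the generated local.settings.json pointing at the Storage Emulator looks roughly like this sketch (the setting values shown are assumptions; your storage account values may differ):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true"
  }
}
```

If you picked a real Azure storage account in the dialog, the two values are full storage connection strings instead of the emulator shortcut.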

.NET Core support

You can now create Azure Functions projects inside Visual Studio that target .NET Core. When creating a Functions project, you can choose a target from the selector at the top of the new project dialog. If you choose the Azure Functions v2 (.NET Standard) target, your project will run against .NET Core or .NET Framework.

Choose Azure Functions runtime

Manage Application Settings

An important part of deploying Functions to Azure is adding appropriate application settings. Azure Functions projects store local settings in the local.settings.json file, but this file does not get published to Azure (by design). So, the settings that control the application running in Azure need to be manually configured. As part of our new tooling improvements, we’ve added the ability for you to view and edit your Function’s app settings in the cloud from within Visual Studio. On the Publish page of the Connected Services dialog, you’ll find an option to Manage Application Settings….

Manage App Settings link in Publish dialog

This launches the Application Settings dialog, which allows you to view, update, add and remove app settings just like you would on the Azure portal. When you’re satisfied with the changes, you can press Apply to push the changes to the server.

Application Settings editor

Detect mismatching Functions runtime versions

To prevent issues caused by developing locally against an out-of-date version of the runtime, Visual Studio now compares your local runtime version against the portal’s version after you publish a Functions app. If they are different, Visual Studio will offer to change the app settings in the cloud to match the version you are using locally.

Update mismatching Functions extension version

Try out the new features

Download the latest version of Visual Studio 2017 (v15.5) and start enjoying the improved Functions experience today.

Ensure you have the Azure workload installed and the latest version of the Azure Web Jobs and Functions Tools.
Note: If you have a fresh installation, you may need to manually apply the update to Azure Functions and Web Jobs Tools. Look for the new notifications flag in the Visual Studio title bar. Clicking the link in the Notifications window opens the Extensions and Updates dialog. From there you can click Update to upgrade to the latest version.

Update notifications

If you have any questions or comments, please let us know by posting in the comments section below.

Announcing .NET 4.7.1 Tools for the Cloud

Today we are releasing a set of providers for ASP.NET 4.7.1 that make it easier than ever to deploy your applications to cloud services and take advantage of cloud-scale features.  This release includes a new CosmosDb provider for session state and a collection of configuration builders.

A Package-First Approach

With previous versions of the .NET Framework, new features were provided “in the box” and shipped with Windows and new versions of the entire framework.  This meant that you could be assured that your providers and capabilities were available on every current version of Windows.  It also meant that you had to wait for a new version of Windows to get new .NET Framework features.  Starting with .NET Framework 4.7, we have adopted a strategy of delivering more abstract features with the framework and deploying providers through the NuGet package manager service.  There are no concrete ConfigurationBuilder classes in .NET Framework 4.7.1, and we are now making several available for your use from the NuGet.org repository.  In this way, we can update and deploy new ConfigurationBuilders without requiring a fresh install of Windows or the .NET Framework.

ConfigurationBuilders Simplify Application Management

In .NET Framework 4.7.1 we introduced the concept of ConfigurationBuilders.  ConfigurationBuilders are objects that allow you to inject application configuration into your .NET Framework 4.7.1 application and continue to use the familiar ConfigurationManager interface to read those values.  Sure, you could always write your configuration files to read other config files from disk, but what if you wanted to apply configuration from environment variables?  What if you wanted to read configuration from a service, like Azure Key Vault?  To work with those configuration sources, you would need to rewrite some non-trivial amount of your application to consume these services.

With ConfigurationBuilders, no code changes are necessary in your application.  You simply add references from your web.config or app.config file to the ConfigurationBuilders you want to use, and your application will start consuming those sources without updating your configuration files on disk.  One form of ConfigurationBuilder is the KeyValueConfigBuilder, which matches a key to a value from an external source and adds that pair to your configuration.  All of the ConfigurationBuilders we are releasing today support this key-value approach to configuration.  Let’s take a look at using one of these new ConfigurationBuilders, the EnvironmentConfigBuilder.

When you install any of our new ConfigurationBuilders into your application, we automatically allocate the appropriate new configSections in your app.config or web.config file as shown below:
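The generated declaration looks something like this sketch (the assembly type names are based on the published packages, but the exact version attributes may differ):

```xml
<configSections>
  <section name="builders"
           type="Microsoft.Configuration.ConfigurationBuilders.ConfigurationBuildersSection, Microsoft.Configuration.ConfigurationBuilders.Base"
           restartOnExternalChanges="false" requirePermission="false" />
</configSections>

<builders>
  <add name="Environment"
       type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment" />
</builders>
```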

The new “builders” section contains information about the ConfigurationBuilders you wish to use in your application.  You can declare any number of ConfigurationBuilders and apply the settings they retrieve to any section of your configuration.  Let’s look at applying our environment variables to the appSettings of this configuration.  You specify which ConfigurationBuilders to apply to a section by adding the configBuilders attribute to that section, indicating the name of the defined ConfigurationBuilder to apply, in this case “Environment”:

<appSettings configBuilders="Environment">
  <add key="COMPUTERNAME" value="VisualStudio" />
</appSettings>

The COMPUTERNAME is a common environment variable set by the Windows operating system that we can use to replace the VisualStudio setting defined here.  With the below ASPX page in our project, we can run our application and see the following results.
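The ASPX page referenced above could be as simple as this sketch (control and page names are illustrative, not from the original post):

```aspx
<%@ Page Language="C#" %>
<%@ Import Namespace="System.Configuration" %>
<script runat="server">
    // List every appSettings entry so we can see what the builders injected
    protected void Page_Load(object sender, EventArgs e)
    {
        foreach (var key in ConfigurationManager.AppSettings.AllKeys)
        {
            SettingsLabel.Text += key + " = " +
                ConfigurationManager.AppSettings[key] + "<br/>";
        }
    }
</script>
<html>
<body>
    <asp:Label ID="SettingsLabel" runat="server" />
</body>
</html>
```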

AppSettings Reported in the Browser

The COMPUTERNAME setting is overwritten by the environment variable.  That’s a nice start, but what if I want to read ALL the environment variables and add them as application settings?  You can specify Greedy Mode for the ConfigurationBuilder and it will read all environment variables and add them to your appSettings:

<add name="Environment" mode="Greedy"
  type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

There are several Modes that you can apply to each of the ConfigurationBuilders we are releasing today:

  • Greedy – Read all settings and add them to the section the ConfigurationBuilder is applied to
  • Strict – (default) Update only those settings where the key matches the configuration source’s key
  • Expand – Operate on the raw XML of the configuration section and do a string replace where the configuration source’s key is found.

The Greedy and Strict options only apply when operating on AppSettings or ConnectionStrings sections.  Expand can perform its string replacement on any section of your config file.
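For example, a hypothetical Expand-mode setup could splice an environment variable straight into a connection string (the key names and token syntax below are assumptions for illustration):

```xml
<configBuilders>
  <builders>
    <add name="Environment" mode="Expand"
         type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />
  </builders>
</configBuilders>

<connectionStrings configBuilders="Environment">
  <!-- ${DB_SERVER} is replaced in the raw XML with the value of the
       DB_SERVER environment variable; the token syntax here is illustrative -->
  <add name="MyDb" connectionString="Server=${DB_SERVER};Database=MyDb" />
</connectionStrings>
```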

You can also specify prefixes for your settings to be handled by adding the prefix attribute.  This allows you to only read settings that start with a known prefix.  Perhaps you only want to add environment variables that start with “APPSETTING_”, you can update your config file like this:

<add name="Environment"
     mode="Greedy" prefix="APPSETTING_"
     type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

Finally, even though using the APPSETTING_ prefix is a nice catch to only read those settings, you may not want your configuration to actually be called “APPSETTING_Setting” in code.  You can use the stripPrefix attribute (default value is false) to omit the prefix when the value is added to your configuration:

Greedy AppSettings with Prefixes Stripped

Notice that COMPUTERNAME was not replaced in this mode.  You can add a second EnvironmentConfigBuilder to read and apply those settings by adding another add statement to the builders section and another entry to the configBuilders attribute on the appSettings element.

Try using the EnvironmentConfigBuilder from inside a Docker container to inject configuration specific to your running instances of your application.  We’ve found that this significantly improves the ability to deploy existing applications in containers without having to rewrite your code to read from alternate configuration sources.
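For instance, a container could be started with a handful of prefixed settings like this (the image name and setting keys here are purely illustrative):

```shell
docker run -d -p 8080:80 \
  -e APPSETTING_Environment="Staging" \
  -e APPSETTING_ServiceUrl="https://staging.example.com/api" \
  mycompany/mywebapp:latest
```

With a Greedy, prefix-stripping EnvironmentConfigBuilder in place, those two variables show up in the app as the Environment and ServiceUrl appSettings.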

Secure Configuration with Azure Key Vault

We are happy to include a ConfigurationBuilder for Azure Key Vault in this initial collection of providers.  This ConfigurationBuilder allows you to secure your application using the Azure Key Vault service, without any required login information to access the vault.  Add this ConfigurationBuilder to your config file and build an add statement like the following:

<add name="AzureKeyVault"
     mode="Strict"
     vaultName="MyVaultName"
     type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure" />

If your application is running on an Azure service that has Managed Service Identity (MSI) enabled, this is all you need to read configuration from the vault and add it to your application.  Conversely, if you are not running on a service with MSI, you can still use the vault by adding the following attributes:

  • clientId – the Azure Active Directory application key that has access to your key vault
  • clientSecret – the Azure Active Directory application secret that corresponds to the clientId

The same mode, prefix, and stripPrefix features described previously are available for use with the AzureKeyVaultConfigBuilder.  You can now configure your application to grab that secret database connection string from the keyvault “conn_mydb” setting with a config file that looks like this:
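Such a config file might look like this sketch, combining the builder declaration shown earlier with a connection string whose name matches the secret:

```xml
<configBuilders>
  <builders>
    <add name="AzureKeyVault" mode="Strict" vaultName="MyVaultName"
         type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure" />
  </builders>
</configBuilders>

<connectionStrings configBuilders="AzureKeyVault">
  <!-- Strict mode replaces this placeholder with the "conn_mydb"
       secret from the vault at runtime -->
  <add name="conn_mydb" connectionString="placeholder" />
</connectionStrings>
```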

You can use other vaults by using the uri attribute instead of the vaultName attribute, and providing the URI of the vault you wish to connect to.  More information about getting started configuring key vault is available online.

Other Configuration Builders Available

Today we are introducing five configuration builders as a preview for you to use and extend.

This new collection of ConfigurationBuilders should help make it easier than ever to secure your applications with Azure Key Vault, or orchestrate your applications when you add them to a container by no longer embedding configuration or writing extra code to handle deployment settings.

We plan to fully release the source code and make these providers open source prior to removing the preview tag from them.

Store SessionState in CosmosDb

Today we are also releasing a session state provider for Azure Cosmos Db.  The globally available CosmosDb service means that you can geographically load-balance your ASP.NET application and your users will always maintain the same session state no matter which server they are connected to.  This async provider is available as a NuGet package and can be added to your project by installing that package and updating the session state provider in your web.config as follows:

<connectionStrings>
  <add name="myCosmosConnString"
       connectionString="- YOUR CONNECTION STRING -"/>
</connectionStrings>
<sessionState mode="Custom" customProvider="cosmos">
  <providers>
    <add name="cosmos"
         type="Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync, Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync"
         connectionStringName="myCosmosConnString"/>
  </providers>
</sessionState>

Summary

We’re continuing to innovate on and update the .NET Framework and ASP.NET.  These new providers should make it easier to deploy your applications to Azure or to make use of containers without having to rewrite your application.  Update your applications to .NET 4.7.1 and start using these new providers to make your configuration more secure, and to use CosmosDb for your session state.

Orchard Core Beta 1 released

This is a guest post by Sebastien Ros on behalf of the Orchard community

Two years ago, the Orchard community started developing Orchard on .NET Core. After 1,500 commits, 297,000 lines of code, 127 projects, we think it’s time to release a public version, namely Orchard Core Beta 1.

What is Orchard Core?

If you know what Orchard and .NET Core are, then it might seem obvious: Orchard Core is a redevelopment of Orchard on ASP.NET Core.

Orchard Core consists of two different targets:

  • Orchard Core Framework: An application framework for building modular, multi-tenant applications on ASP.NET Core.
  • Orchard Core CMS: A Web Content Management System (CMS) built on top of the Orchard Core Framework.

It’s important to note the differences between the framework and the CMS. Some developers who want to develop SaaS applications will only be interested in the modular framework. Others who want to build administrable websites will focus on the CMS and build modules to enhance their sites or the whole ecosystem.

Beta

Quoting Jeff Atwood on https://blog.codinghorror.com/alpha-beta-and-sometimes-gamma/:

“The software is complete enough for external testing — that is, by groups outside the organization or community that developed the software. Beta software is usually feature complete, but may have known limitations or bugs. Betas are either closed (private) and limited to a specific set of users, or they can be open to the general public.”

It means we feel confident that developers can start building applications and websites using the current state of development. There are bugs, limitations and there will be breaking changes, but the feedback has been strong enough that we think it’s time to show you what we have accomplished so far.

Building Software as a Service (SaaS) solutions with the Orchard Core Framework

It’s very important to understand the Orchard Core Framework is distributed independently from the CMS on nuget.org. We’ve made some sample applications on https://github.com/OrchardCMS/OrchardCore.Samples that will guide you on how to build modular and multi-tenant applications using just Orchard Core Framework without any of the CMS specific features.

One of our goals is to enable community-based ecosystems of hosted applications which can be extended with modules, like e-commerce systems, blog engines and more. The Orchard Core Framework enables a modular environment that allows different teams to work on separate parts of an application and makes components reusable across projects.

What’s new in Orchard Core CMS

Orchard Core CMS is a complete rewrite of Orchard CMS on ASP.NET Core. It’s not just a port: we wanted to improve performance drastically and align as closely as possible with the development model of ASP.NET Core.

  • Performance. This might be the most obvious change when you start using Orchard Core CMS. It’s extremely fast for a CMS, so fast that we haven’t even needed to work on an output cache module. To give you an idea, without caching, Orchard Core CMS is around 20 times faster than the previous version.
  • Portable. You can now develop and deploy Orchard Core CMS on Windows, Linux and macOS. We also have Docker images ready for use.
  • Document database abstraction. Orchard Core CMS still requires a relational database, and is compatible with SQL Server, MySQL, PostgreSQL and SQLite, but it’s now using a document abstraction (YesSql) that provides a document database API to store and query documents. This is a much better approach for CMS systems and helps performance significantly.
  • NuGet Packages. Modules and themes are now shared as NuGet packages. Creating a new website with Orchard Core CMS is actually as simple as referencing a single meta package from the NuGet gallery. It also means that updating to a newer version only involves updating the version number of this package.
  • Live preview. When editing a content item, you can now see a live preview of how it will look on your site, even before saving your content. It also works for templates: you can browse any page to inspect the impact of a template change as you type it.
  • Liquid templates support. Editors can safely change the HTML templates with the Liquid template language. It was chosen as it’s both very well documented (Jekyll, Shopify, …) and secure.
  • Custom queries. We wanted to give developers a way to access all their data as simply as possible. We created a module that lets you create custom ad-hoc SQL and Lucene queries that can be reused to display custom content, or exposed as API endpoints. You can use them to create efficient queries, or to expose your data to SPA applications.
  • Recipes. Recipes are scripts that can contain content and metadata to build a website. You can now include binary files, and even use them to deploy your sites remotely from a staging to a production environment for instance. They can also be part of NuGet Packages, allowing you to ship predefined websites.
  • Scalability. Because Orchard Core is a multi-tenant system, you can host as many websites as you want with a single deployment. A typical cloud machine can then host thousands of sites in parallel, with database, content, theme and user isolation.

Resources

Development plan

The Orchard Core source code is available on GitHub.

There are still many important pieces to add and you might want to check our roadmap, but it’s also the best time to jump into the project and start contributing new modules, themes, improvements, or just ideas.

Feel free to drop on our dedicated Gitter chat and ask questions.


Improve website performance by optimizing images

We all want our web applications to load as fast as possible to give the best possible experience to the users. One of the steps to achieve that is to make sure the images we use are as optimized as possible.

If we can reduce the file size of the images then we can significantly reduce the weight of the website. This is important for various reasons, including:

  • Less bandwidth needed == cheaper hosting
  • The website loads faster
  • Faster websites have higher conversion rates
  • Less data needed to load your page on mobile devices (mobile data can be expensive)

Optimizing images is always better for the user, and therefore for you too, but it’s easy to forget and a bit cumbersome to do by hand. So, let’s look at a few options that are simple to use.

All these options use great optimization algorithms that are capable of reducing the file size of images by up to 75% without any noticeable quality loss.

Gulp

If you are already using Gulp, then using the gulp-imagemin package is a good option. When configured it will automatically optimize the images as part of your build.
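As a rough illustration of the configuration involved, a gulpfile task using gulp-imagemin might look like this (gulp 4 style; package versions and paths are assumptions, not from the original post):

```javascript
// Sketch only: requires the gulp and gulp-imagemin npm packages.
const gulp = require('gulp');
const imagemin = require('gulp-imagemin');

// Optimize every image under images/ and write the results to dist/images
function optimizeImages() {
  return gulp.src('images/**/*')
    .pipe(imagemin())
    .pipe(gulp.dest('dist/images'));
}

exports.images = optimizeImages;
```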

Pros:

  • Can be automated as part of a build
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • Requires some configuration
  • Increases the build time sometimes by a lot
  • Doesn’t optimize dynamically added images

Visual Studio Image Optimizer

The Image Optimizer extension for Visual Studio is one of the most popular extensions due to its simplicity of use and strong optimization algorithms.

Pros:

  • Remarkably simple to use – no configuration
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • No build time support
  • Doesn’t optimize dynamically added images

Azure Image Optimizer

Installing the Azure.ImageOptimizer NuGet package into any ASP.NET application will automatically optimize images once the app is deployed to Azure App Services with zero code changes to the web application. It uses the same algorithms as the Image Optimizer extension for Visual Studio.

To try out the Azure Image Optimizer you’ll need an Azure subscription. If you don’t already have one you can get started for free.

This is the only solution that optimizes images dynamically added at runtime, such as user-uploaded profile pictures.

Pros:

  • Remarkably simple to use
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • Optimizes dynamically added images
  • Set it and forget it
  • It is open source

Cons:

  • Only works on Azure App Service

To understand how the Azure Image Optimizer works, check out the documentation on GitHub. Spoiler alert – it is an Azure Webjob running next to your web application.

Final thoughts

There are many more options for image optimization that I didn’t cover, but it doesn’t really matter how you choose to optimize the images. The important part is that you optimize them.

My personal preference is to use the Image Optimizer extension for Visual Studio to optimize the known images and combine that with the Azure.ImageOptimizer NuGet package to handle any dynamically added images at runtime.

For more information about image optimization techniques check out Addy Osmani’s very comprehensive eBook Essential Image Optimization.

Configuring HTTPS in ASP.NET Core across different platforms

As the web moves to be more secure by default, it’s more important than ever to make sure your websites have HTTPS enabled. And if you’re going to use HTTPS in production, it’s a good idea to develop with HTTPS enabled so that your development environment is as close to your production environment as possible. In this blog post we’re going to go through how to set up an ASP.NET Core app with HTTPS for local development on Windows, Mac, and Linux.

This post is primarily focused on enabling HTTPS in ASP.NET Core during development using Kestrel. When using Visual Studio you can alternatively enable HTTPS in the Debug tab of your app to easily have IIS Express enable HTTPS without it going all the way to Kestrel. This closely mimics what you would have if you’re handling HTTPS connections in production using IIS. However, when running from the command-line or in a non-Windows environment you must instead enable HTTPS directly using Kestrel.

The basic steps we will use for each OS are:

  1. Create a self-signed certificate that Kestrel can use
  2. Optionally trust the certificate so that your browser will not warn you about using a self-signed certificate
  3. Configure Kestrel to use that certificate

You can also reference the complete Kestrel HTTPS sample app

Create a certificate

Windows

Use the New-SelfSignedCertificate Powershell cmdlet to generate a suitable certificate for development:

New-SelfSignedCertificate -NotBefore (Get-Date) -NotAfter (Get-Date).AddYears(1) -Subject "localhost" -KeyAlgorithm "RSA" -KeyLength 2048 -HashAlgorithm "SHA256" -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsage KeyEncipherment -FriendlyName "HTTPS development certificate" -TextExtension @("2.5.29.19={critical}{text}","2.5.29.37={critical}{text}1.3.6.1.5.5.7.3.1","2.5.29.17={critical}{text}DNS=localhost")

Linux & Mac

For Linux and Mac we will use OpenSSL. Create a file https.config with the following data:
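A minimal configuration along these lines works with the commands that follow (the subject and key settings are assumptions you should adapt):

```
[ req ]
default_bits       = 2048
default_md         = sha256
default_keyfile    = key.pem
prompt             = no
encrypt_key        = no
distinguished_name = req_distinguished_name
req_extensions     = v3_req
x509_extensions    = v3_req

[ req_distinguished_name ]
commonName = localhost

[ v3_req ]
subjectAltName = DNS:localhost
```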

Run the following command to generate a private key and a certificate signing request:

openssl req -config https.config -new -out csr.pem

Run the following command to create a self-signed certificate:

openssl x509 -req -days 365 -extfile https.config -extensions v3_req -in csr.pem -signkey key.pem -out https.crt

Run the following command to generate a pfx file containing the certificate and the private key that you can use with Kestrel:

openssl pkcs12 -export -out https.pfx -inkey key.pem -in https.crt -password pass:<password>

Trust the certificate

This step is optional, but without it the browser will warn you that your site is potentially unsafe. You will see something like the following if your browser doesn’t trust your certificate:

Windows

To trust the generated certificate on Windows you need to add it to the current user’s trusted root store:

  1. Run certmgr.msc
  2. Find the certificate under Personal/Certificates. The “Issued To” field should be localhost and the “Friendly Name” should be HTTPS development certificate
  3. Copy the certificate and paste it under Trusted Root Certification Authorities/Certificates
  4. When Windows presents a security warning dialog to confirm you want to trust the certificate, click on “Yes”.

Linux

There is no centralized way of trusting a certificate on Linux, so you can do one of the following:

  1. Add the URL you are using to your browser’s list of security exceptions
  2. Trust all self-signed certificates on localhost
  3. Add the https.crt to the list of trusted certificates in your browser.

How exactly to achieve this depends on your browser/distro.

Mac

Option 1: Command line

Run the following command:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain https.crt

Some browsers, such as Chrome, require you to restart them before this trust will take effect.

Option 2: Keychain UI

If you open the “Keychain Access” app you can drag your https.crt into the Login keychain.

Configure Kestrel to use the certificate we generated

To configure Kestrel to use the generated certificate, add the following code and configuration to your application.

Application code

This code will read a set of HTTP server endpoint configurations from a custom section in your app configuration settings and then apply them to Kestrel. The endpoint configurations include settings for configuring HTTPS, like which certificate to use. Add the code for the ConfigureEndpoints extension method to your application and then call it when setting up Kestrel for your host in Program.cs:
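The full sample supports multiple endpoints and certificate stores; the following is a condensed sketch of the idea (the extension method name matches the text, but this simplified body, which handles only a file-based certificate, is an assumption):

```csharp
using System.Net;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public static class KestrelHostBuilderExtensions
{
    public static IWebHostBuilder ConfigureEndpoints(this IWebHostBuilder builder)
    {
        return builder.UseKestrel((context, options) =>
        {
            // Read the HTTPS endpoint settings from the custom config section
            var https = context.Configuration.GetSection("HttpServer:Endpoints:Https");
            var port = https.GetValue<int>("Port", 443);

            options.Listen(IPAddress.Loopback, port, listenOptions =>
            {
                // The certificate path and password come from configuration,
                // so the password can live in user secrets or an env variable
                listenOptions.UseHttps(https["FilePath"], https["Password"]);
            });
        });
    }
}
```

In Program.cs you would then chain .ConfigureEndpoints() onto the WebHostBuilder before building the host.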

Windows sample configuration

To configure your endpoints and HTTPS settings on Windows you could then put the following into your appsettings.Development.json, which configures an HTTPS endpoint for your application using a certificate in a certificate store:
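Assuming the configuration keys used throughout this post, a store-based endpoint section might look like this sketch (property names are assumptions tied to your ConfigureEndpoints implementation):

```json
{
  "HttpServer": {
    "Endpoints": {
      "Https": {
        "Host": "localhost",
        "Port": 443,
        "Scheme": "https",
        "StoreName": "My",
        "StoreLocation": "CurrentUser"
      }
    }
  }
}
```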

Linux and Mac sample configuration

On Linux or Mac your appsettings.Development.json would look something like this, where your certificate is specified using a file path:
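Again assuming the configuration keys used in this post, a file-based endpoint sketch could be:

```json
{
  "HttpServer": {
    "Endpoints": {
      "Https": {
        "Host": "localhost",
        "Port": 443,
        "Scheme": "https",
        "FilePath": "https.pfx",
        "Password": "<password>"
      }
    }
  }
}
```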

You can then use the user secrets tool, environment variables, or some secure store such as Azure KeyVault to store the password of your certificate using the HttpServer:Endpoints:Https:Password configuration key instead of storing the password in a file that goes into source control.

For example, to store the certificate password as a user secret during development, run the following command from your project:

dotnet user-secrets set HttpServer:Endpoints:Https:Password <password>

To override the certificate password using an environment variable, create an environment variable named HttpServer:Endpoints:Https:Password (or HttpServer__Endpoints__Https__Password if your system does not allow :) with the value of the certificate password.
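For example, in a bash shell (the value here is a placeholder):

```shell
# ':' is not valid in environment variable names in most shells, so
# ASP.NET Core also maps '__' in variable names to ':' in config keys.
export HttpServer__Endpoints__Https__Password='example-password'
echo "$HttpServer__Endpoints__Https__Password"
```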

Run your application

When running from Visual Studio you can change the default launch URL for your application to use the HTTPS address by modifying the launchSettings.json file:
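For example (the profile name and port below are illustrative):

```json
{
  "profiles": {
    "TestingHttps": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "https://localhost:443/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
```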

Redirect from HTTP to HTTPS

When you set up your site to use HTTPS by default, you typically still want to allow HTTP requests, but have them redirected to the corresponding HTTPS address. In ASP.NET Core this can be accomplished using the URL rewrite middleware. Place the following code in the Configure method of your Startup class:
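With the Microsoft.AspNetCore.Rewrite package referenced, a minimal Configure method could look like this sketch (the middleware after the rewriter is illustrative):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public void Configure(IApplicationBuilder app)
{
    // Redirect all HTTP requests to the corresponding HTTPS address
    app.UseRewriter(new RewriteOptions().AddRedirectToHttps());

    app.UseStaticFiles();
    app.UseMvc();
}
```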

Conclusion

With a little bit of work you can set up your ASP.NET Core 2.0 site to always use HTTPS. For a future release we are working to simplify setting up HTTPS for ASP.NET Core apps, and we plan to enable HTTPS in the project templates by default. We will share more details on these improvements as they become publicly available.

Testing ASP.NET Core MVC web apps in-memory

This post was written and submitted by Javier Calvarro Nelson, a developer on the ASP.NET Core MVC team

Testing is an important part of the development process of any app. In this blog post we’re going to explore how we can test an ASP.NET Core MVC app using an in-memory server. This approach has several advantages:

  • It’s very fast because it does not start a real server
  • It’s reliable because there is no need to reserve ports or clean up resources after it runs
  • It’s easier than other ways of testing your application, such as using an external test driver
  • It allows testing of traits in your application that are hard to unit test, like ensuring your authorization rules are correct

The main shortcoming of this approach is that it’s not well suited to test applications that heavily rely on JavaScript. That said, if you’re writing a traditional web app or an API then all the benefits mentioned above apply.

For testing an MVC app we’re going to use TestServer. TestServer is an in-memory implementation of a server for ASP.NET Core apps, akin to Kestrel or HTTP.sys.

Creating and setting up the projects

Start by creating an MVC app using the following command:

dotnet new mvc -au Individual -uld --use-launch-settings -o .\TestingMVC\src\TestingMVC

Create a test project with the following command:

dotnet new xunit -o .\TestingMVC\test\TestingMVC.Tests

Next create a solution, add the projects to the solution and add a reference to the app project from the test project:

dotnet new sln
dotnet sln add .\src\TestingMVC\TestingMVC.csproj
dotnet sln add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj
dotnet add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj reference .\src\TestingMVC\TestingMVC.csproj

Add references to the components we’re going to use for testing by adding the following item group to the test project file:
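The version numbers below are illustrative for ASP.NET Core 2.0; adjust them to match your app:

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
  <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.0" />
</ItemGroup>
```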

Now, we can run dotnet restore on the project or the solution and we can move on to writing tests.

Writing a test to retrieve the page at ‘/’

Now that we have our projects set up, let’s write a test that will serve as an example of how other tests will look.

We’re going to start by changing Program.cs in our app project to look like this:
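(The listing was embedded in the original post; the sketch below follows the default ASP.NET Core 2.0 template, so adjust the class and namespace names to your project.)

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        CreateWebHostBuilder(args).Build();

    // Exposed separately so tests can reuse the app's configuration
    // and chain additional calls on the builder before building the host.
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}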

In the snippet above, we’ve changed the method IWebHost BuildWebHost(string[] args) to call a new method IWebHostBuilder CreateWebHostBuilder(string[] args) within it. The reason for this is that we want to allow our tests to configure the IWebHostBuilder in the same way the app does and to allow making changes required by tests. (By chaining calls on the WebHostBuilder.)

One example of this will be setting the content root of the app when we’re running the server in a test. The content root needs to be based on the application’s root, not the test’s root.

Now, we can create a test like the one below to get the contents of our home page. This test will fail because we’re missing a couple of things that we describe below.
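The test listing from the original post did not survive extraction; a sketch of what it might look like is below (the relative path from the test’s bin folder back to the app project is an assumption based on the solution layout above — adjust it to your layout):

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;

public class HomePageTests
{
    [Fact]
    public async Task Get_HomePage_ReturnsOk()
    {
        // Reuse the app's builder and point the content root at the
        // app project instead of the test project's bin folder.
        var builder = Program.CreateWebHostBuilder(new string[] { })
            .UseContentRoot(Path.GetFullPath(@"..\..\..\..\..\src\TestingMVC"));

        using (var server = new TestServer(builder))
        using (var client = server.CreateClient())
        {
            // The request is dispatched in-memory; no network is involved.
            var response = await client.GetAsync("/");
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }
}
```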

The test above can be decomposed into the following actions:

  • Create an IWebHostBuilder in the same way that my app creates it
  • Override the content root of the app to point to the app’s project root instead of the bin folder of the test app. (.\src\TestingMVC instead of .\test\TestingMVC.Tests\bin\Debug\netcoreapp2.0)
  • Create a test server from the WebHost builder
  • Create an HttpClient that can be used to communicate with our app. (This uses an internal mechanism that sends the requests in-memory – no network involved.)
  • Send an HTTP request to the server using the client
  • Ensuring the status code of the response is correct

Requirements for Razor views to run on a test context

If we try to run the test above, we will probably get an HTTP 500 error instead of an HTTP 200 success. The reason is that the dependency context of the app is not correctly set up in our tests. To fix this, there are a few actions we need to take:

  • Copy the .deps.json file from our app to the bin folder of the testing project
  • Disable shadow copying assemblies

For the first bullet point, we can create a target file like the one below and include it in our testing project file as follows:
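One possible shape for such a targets file, which copies each referenced project’s .deps.json into the test output folder after build (the file name is illustrative):

```xml
<!-- CopyDepsFiles.targets (name is illustrative) -->
<Project>
  <Target Name="CopyDepsFiles" AfterTargets="Build" Condition="'$(TargetFramework)' != ''">
    <ItemGroup>
      <!-- For each referenced project, compute the path of its .deps.json -->
      <DepsFilePaths Include="$([System.IO.Path]::ChangeExtension('%(_ResolvedProjectReferencePaths.FullPath)', '.deps.json'))" />
    </ItemGroup>
    <!-- Copy it next to the test assemblies so the dependency context resolves -->
    <Copy SourceFiles="%(DepsFilePaths.FullPath)" DestinationFolder="$(OutputPath)" Condition="Exists('%(DepsFilePaths.FullPath)')" />
  </Target>
</Project>
```

It can then be pulled into the test project with an import such as `<Import Project="CopyDepsFiles.targets" />` in the test .csproj.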

For the second bullet point, the implementation is dependent on what testing framework we use. For xUnit, add an xunit.runner.json file in the root of the test project (set it to Copy Always) like the one below:
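For xUnit, the configuration file is minimal:

```json
{
  "shadowCopy": false
}
```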

This step is subject to change at any point; for more information look at the xUnit docs at http://xunit.github.io/#documentation.

Now if you re-run the sample test, it will pass.

Summary

  • We’ve seen how to create in-memory tests for an MVC app
  • We’ve discussed the requirements for setting up the app to find static files and find and compile Razor views in the context of a test:
    • Set up the content root in the tests to the app’s root folder
    • Ensure the test project references all the assemblies in the app
    • Copy the app’s deps file to the bin folder of the test project
    • Disable shadow copying in your testing framework of choice
  • We’ve shown how to write a functional test in-memory using TestServer and the same configuration your app uses when running on a real server in production

The source code of the completed project is available here: https://github.com/aspnet/samples/tree/master/samples/aspnetcore/mvc/testing/TestingMVC

Happy testing!

Take a Break with Azure Functions


So, it’s the Holidays. The office is empty, the boss is away, and you’ve got a bit of free time on your hands. How about learning a new skill and having some fun?

Azure Functions is a serverless technology that executes code based on various triggers (e.g. a URL is called, an item is placed on a queue, a file is added to blob storage, a timer goes off). There’s all sorts of things you can do with Azure Functions, like running high CPU-bound calculations, calling various web services and reporting results, sending messages to groups – and nearly anything you can imagine. But unlike traditional applications and services, there’s no need to set up an application host or server that’s constantly running, waiting to respond to requests or triggers. Azure Functions are deployed as and when needed, to as many servers as needed, to meet the demands of incoming requests. There’s no need to set up and maintain hosting infrastructure, you get automatic scaling, and – best of all – you only pay for the cycles used while your functions are being executed.

Want to have a go and try your hand at the latest in web technologies? Follow along to get started with your own Azure Functions.

In this post I’ll show you how to create an Azure Function that triggers every 30 minutes and writes a note into your Slack channel to tell you to take a break. We’ll create a new Function app, generate the access token for Slack, then run the function locally.

Prerequisites:

Create a Function App (Timer Trigger)

We all know how important it is to take regular breaks if you spend all day sitting at a desk, right? So, in this tutorial, we’ll use a Timer Trigger function to post a message to a Slack channel at regular intervals to remind you (and your whole team) to take a break. A Timer Trigger is a type of Azure Function that is triggered to run on regular time intervals.

Just run it

If you want to skip ahead and run the function locally, fetch the source from this repo, insert the appropriate Slack channel(s) and OAuth token in the local.settings.json file, start the Azure Storage Emulator, then Run (or Debug) the Functions app in Visual Studio.

Step-by-step guide
  1. Open Visual Studio 2017 and select File->New Project.
  2. Select Azure Functions under the Visual C# category.
  3. Provide a name (e.g. TakeABreakFunctionApp) and press OK.
    The New Function Project dialog will open.
  4. Select Azure Functions v1 (.NET Framework), choose Timer trigger and press OK.
    Note: This will also work with Azure Functions v2, but for this tutorial I’ve chosen v1, since v2 is still in preview.

    New Timer Trigger

    A new solution is created with a Functions App project and single class called Function1 that contains a basic Timer trigger.

  5. Edit Function1.cs.
    • Add helper methods:
      • Env (for fetching environment variables)
      • SendHttpRequest (for sending authenticated http requests)
      • SendMessageToSlack (for generating and sending the appropriate Slack request – based on environment variables)
    • Update method: Run
      • Change the return type to async Task.
      • Add an asynchronous call to the SendMessageToSlack method.
      • Update the cron settings for the TimerTrigger attribute.
    • Add appropriate Using statements.

  6. The completed code should look like this:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;
    
    namespace TakeABreakFunctionsApp
    {
        public static class Function1
        {
            [FunctionName("Function1")]
            public static async Task Run([TimerTrigger("0 */30 * * * *")]TimerInfo myTimer, TraceWriter log)
            {
                log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
                await SendMessageToSlack("You're working too hard. How about you take a break?", log);
            }
    
            private static async Task SendMessageToSlack(string message, TraceWriter log)
            {
                // Fetch environment variables (from local.settings.json when run locally)
                string channel = Env("ChannelToNotify");
                string slackbotUrl = Env("SlackbotUrl");
                string bearerToken = Env("SlackOAuthToken");
    
                // Prepare request and send via Http
                log.Info($"Sending to {channel}: {message}");
                string requestUrl = $"{slackbotUrl}?channel={Uri.EscapeDataString(channel)}&text={Uri.EscapeDataString(message)}";
                await SendHttpRequest(requestUrl, bearerToken);
            }
    
            private static async Task SendHttpRequest(string requestUrl, string bearerToken)
            {
                HttpClient httpClient = new HttpClient();
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);
                HttpResponseMessage response = await httpClient.GetAsync(requestUrl);
            }
    
            private static string Env(string name) => Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
        }
    }
  7. Edit local.settings.json.
    Add the following environment variables.
    • SlackbotUrl – The URL for the Slack API to post chat messages
    • SlackOAuthToken – An OAuth token that grants permission for your app to send messages to a Slack workspace.
      – See below for help generating a Slack OAuth token.
    • ChannelToNotify – The Slack channel to send messages to
  8. Your local.settings.json should look something like this:
    (Your SlackOAuthToken and ChannelToNotify variables will be specific to your Slack workspace.)

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
        "SlackbotUrl": "https://slack.com/api/chat.postMessage",
        "SlackOAuthToken": "[insert your generated token]",
        "ChannelToNotify": "[your channel id]"
      }
    }

Your Functions app is now ready to run! You just need to grab an authorization token for your Slack workspace.

Generate an OAuth token for your app to send messages to your Slack workspace

Before you can post a message to a Slack workspace, you must first tell Slack about the app and assign specific permissions for the app to send messages as a bot. Once you’ve installed the app to the Slack workspace, you will be issued an OAuth token that you can send with your http requests. For full details, you can follow the instructions here. Otherwise, follow the steps below.

  • Click here to register your new Functions app with your Slack workspace.
  • Provide a name (e.g. “Take a Break”) and select the appropriate Slack workspace, then press Create App.
  • Create A Slack App

    When the app is registered with Slack, the Slack API management page opens for the new app.

  • Select OAuth & Permissions from the navigation menu on the left.
  • In the OAuth & Permissions page, scroll down to Scopes, select the permission chat:write:bot, then select Save Changes.
  • Select Permission Scopes

  • After the scope permissions have been created and the page has refreshed, scroll to the top of the OAuth & Permissions page and select Install App to Workspace.
  • Slack Install App to Workspace

  • A confirm page opens. Review the details, then click Authorize.
  • Your OAuth Access Token is generated and presented at the top of the page.
  • OAuth Access Token

  • Copy this token and add it to your local.settings.json as the value for SlackOAuthToken.

    Note: The OAuth access token is a secret and should not be made public. If you check this token into a public source control system like GitHub, Slack will find it and permanently disable it!

Run your Functions App on your local machine

Now that you’ve registered your app with Slack and have provided a valid OAuth token in your local.settings.json, you can run the Function locally.

Start the local Storage Emulator

You can configure your function to use a storage account on Azure. But if your app is configured to use development storage (which is the default for new Functions), then it will run against the local Azure Storage Emulator. Therefore, you’ll need to make sure the Storage Emulator is started before running your Functions app.

  • Open the Windows Start Menu and search for “Storage Emulator”.

Microsoft Azure Storage Emulator will launch. You can manage it via the icon in the Windows System Tray.

Start the Function app from Visual Studio
  • Press Ctrl+F5 to build and run the Functions app.
  • If prompted, update to the latest Functions tools.
  • A new command window launches and displays the log output from the Functions app.

Function App Running

After a certain period of time, the Timer trigger will fire and send a message to your Slack workspace.

Function Timer Executes

You should see the message appear in the appropriate Slack channel.

Message Appears In Slack

Feel free to play around with the timer cron options in the Run method’s attributes to configure the function to execute at the intervals you’d like. Here are some example cron settings.
        Trigger cron format: {second} {minute} {hour} {day} {month} {day-of-week}
        (“0 */15 6-20 * * *”) = Every 15 minutes, between 06:00 AM and 08:59 PM
        (“0 0 0-5,21-23 * * *”) = At the top of every hour from 12:00 AM to 05:00 AM and 09:00 PM to 11:00 PM

Congratulations! You’ve written a working Azure Functions App with a Timer trigger function.

What’s next?


Publish your Functions App to the cloud
So that your Functions app is always available and can be accessed globally (e.g. for HTTP trigger types), you can publish your app to the cloud. This article describes the process of publishing a Functions app to Azure.

Experiment with other Functions types
There’s an excellent collection of open-source samples available here. Poke around and see what takes your interest.

Tell us about your experience with Azure Functions
We’d love to hear about your experience with Azure Functions. If you’ve got a minute, please complete this short survey.
As always, feel free to leave comments and questions in the space below.

Happy holidays!

Justin Clareburt
Senior Program Manager
Visual Studio and .NET

Announcing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4


Today we are releasing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 on NuGet. This release contains some minor bug fixes and a couple of new features specifically targeted at enabling .NET Standard support for the ASP.NET Web API Client.

You can find the full list of features and bug fixes for this release in the release notes.

To update an existing project to use this preview release run the following commands from the NuGet Package Manager Console for each of the packages you wish to update:

Install-Package Microsoft.AspNet.Mvc -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebApi -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebPages -Version 3.2.4-preview1

ASP.NET Web API Client support for .NET Standard

The ASP.NET Web API Client package provides strongly typed extension methods for accessing Web APIs using a variety of formats (JSON, XML, form data, custom formatter). This saves you from having to manually serialize or deserialize the request or response data. It also enables using .NET types to share type information about the request or response with the server and client.

This release adds support for .NET Standard 2.0 to the ASP.NET Web API Client. .NET Standard is a standardized set of APIs that when implemented by .NET platforms enables library sharing across .NET implementations. This means that the Web API client can now be used by any .NET platform that supports .NET Standard 2.0, including cross-platform ASP.NET Core apps that run on Windows, macOS, or Linux. The .NET Standard version of the Web API client is also fully featured (unlike the PCL version) and has the same API surface area as the full .NET Framework implementation.

For example, let’s use the new .NET Standard support in the ASP.NET Web API Client to call a Web API from an ASP.NET Core app running on .NET Core. The code below shows an implementation of a ProductsClient that uses the Web API client helper methods (ReadAsAsync<T>(), Post/PutAsJsonAsync<T>()) to get, create, update, and delete products by making calls to a products Web API:
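The full listing was embedded in the original post; the sketch below shows what such a client might look like. The Product model, route templates, and constructor shape are assumptions for illustration:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Assumed shared model type; your actual Product will differ.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsClient
{
    private readonly HttpClient _client;

    public ProductsClient(HttpClient client)
    {
        _client = client;
    }

    public async Task<Product> GetProductAsync(int id)
    {
        var response = await _client.GetAsync($"/api/products/{id}");
        response.EnsureSuccessStatusCode();
        // ReadAsAsync<T> picks a formatter based on the response content type
        return await response.Content.ReadAsAsync<Product>();
    }

    public async Task CreateProductAsync(Product product)
    {
        var response = await _client.PostAsJsonAsync("/api/products", product);
        response.EnsureSuccessStatusCode();
    }

    public async Task UpdateProductAsync(Product product)
    {
        var response = await _client.PutAsJsonAsync($"/api/products/{product.Id}", product);
        response.EnsureSuccessStatusCode();
    }

    public async Task DeleteProductAsync(int id)
    {
        var response = await _client.DeleteAsync($"/api/products/{id}");
        response.EnsureSuccessStatusCode();
    }
}
```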

Note that all the serialization and deserialization is handled for you. The ReadAsAsync<T>() methods will also handle selecting an appropriate formatter for reading the response based on its content type (JSON, XML, etc.).

This ProductsClient can then be used to call the Products Web API from your Razor Pages in an ASP.NET Core 2.0 app running on .NET Core (or from any .NET platform that supports .NET Standard 2.0). For example, here’s how you can use the ProductsClient from the page model for a page that lets you edit the details for a product:
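As a sketch, the Razor Pages page model might consume the client like this (the page and handler names are illustrative, and the client is assumed to be registered for dependency injection):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class EditModel : PageModel
{
    private readonly ProductsClient _productsClient;

    // ProductsClient is resolved from DI (registered in Startup)
    public EditModel(ProductsClient productsClient)
    {
        _productsClient = productsClient;
    }

    [BindProperty]
    public Product Product { get; set; }

    public async Task OnGetAsync(int id)
    {
        // Fetch the product to edit via the Web API
        Product = await _productsClient.GetProductAsync(id);
    }

    public async Task<IActionResult> OnPostAsync()
    {
        // Push the edited details back to the Web API
        await _productsClient.UpdateProductAsync(Product);
        return RedirectToPage("./Index");
    }
}
```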

For more details on using the ASP.NET Web API Client see Call a Web API From a .NET Client (C#).

Please try out Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 and let us know what you think! Any feedback can be submitted as issues on GitHub. Assuming everything with this preview goes smoothly, we expect to ship a stable release of these packages by the end of the month.

Enjoy!
