
Announcing ASP.NET Core 1.1 Preview 1


Today we are happy to announce the release of ASP.NET Core 1.1 Preview 1. This release includes a bunch of great new features along with many bug fixes and general enhancements. We invite you to try out the new features and to provide feedback.

To update an existing project to ASP.NET Core 1.1 Preview 1 you will need to do the following:

  1. Download and install the updated .NET Core 1.1 Preview 1 SDK
  2. Follow the instructions on the .NET Core 1.1 Preview 1 announcement to update your project to use .NET Core 1.1 Preview 1
  3. Update your ASP.NET Core packages dependencies to use the new 1.1.0-preview1 versions

Note: To update your packages to 1.1 Preview 1 with the NuGet Package Manager in Visual Studio, you will need to download and install NuGet Package Manager for Visual Studio 2015, version 3.5 RC1 or later, from nuget.org.

You should now be ready to try out 1.1!

What’s new?

The following new features are available for preview in this release:

  • URL Rewriting middleware
  • Response caching middleware
  • Response compression middleware
  • WebListener server
  • View Components as Tag Helpers
  • Middleware as MVC filters
  • Cookie-based TempData provider
  • View compilation
  • Azure App Service logging provider
  • Azure Key Vault configuration provider
  • Redis and Azure Storage Data Protection Key Repositories

For additional details on the changes included in this release please check out the release notes.

Let’s look at some of these features that are ready for you to try out in this preview:

URL Rewriting Middleware

We are bringing URL rewriting functionality to ASP.NET Core through a middleware component that can be configured using IIS standard XML formatted rules, Apache Mod_Rewrite syntax, or some simple C# methods coded into your application. This allows mapping a public URL space, designed for consumption by your clients, to whatever representation the downstream components of your middleware pipeline require, as well as redirecting clients to different URLs based on a pattern.

For example, you could ensure a canonical hostname by rewriting any requests to http://example.com to instead be http://www.example.com for everything after the re-write rules have run. Another example is to redirect all requests to http://example.com to https://example.com. You can even configure URL rewrite such that both rules are applied and all requests to example.com are always redirected to SSL and rewritten to www.

We can get started with this middleware by adding a reference to our web application for the Microsoft.AspNetCore.Rewrite package.  This allows us to add a call to configure RewriteOptions in our Startup.Configure method for our rewriter:
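The original post showed this snippet as a screenshot. Here is a minimal sketch of what the configuration might look like, assuming an IIS-format rules file named UrlRewrite.xml in the content root and an injected IHostingEnvironment named env (the rule patterns are illustrative):

var options = new RewriteOptions()
    // Redirect: send the client an HTTP 301 pointing at the canonical www host
    .AddRedirect("^(.*)$", "http://www.example.com/$1", statusCode: 301)
    // Rewrite: change the path the rest of the pipeline sees, without telling the client
    .AddRewrite(@"^app/(\d+)$", "app?id=$1", skipRemainingRules: true)
    // Rules can also be loaded from an IIS-format XML rules file
    .AddIISUrlRewrite(env.ContentRootFileProvider, "UrlRewrite.xml");

app.UseRewriter(options);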

As you can see, we can both force a rewrite and redirect with different rules.

  • Url Redirect sends an HTTP 301 Moved Permanently status code to the client with the new address
  • Url Rewrite gives a different URL to the next steps in the HTTP pipeline, tricking it into thinking a different address was requested.

Response Caching Middleware

Response caching, similar to the OutputCache capabilities of previous ASP.NET releases, can now be activated in your application by adding the Microsoft.AspNetCore.ResponseCaching and the Microsoft.Extensions.Caching.Memory packages to your application.  You can add this middleware to your application in the Startup.ConfigureServices method and configure the response caching from the Startup.Configure method.  For a sample implementation, check out the demo in the ResponseCaching repository.
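In outline, the wiring looks like this. This is a minimal sketch; the exact registration calls may differ slightly in this preview, and responses are only cached when appropriate Cache-Control headers are present:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();        // from Microsoft.Extensions.Caching.Memory
    services.AddResponseCaching();    // from Microsoft.AspNetCore.ResponseCaching
}

public void Configure(IApplicationBuilder app)
{
    app.UseResponseCaching();

    app.Run(async context =>
    {
        // Mark the response as cacheable for 10 seconds
        context.Response.Headers["Cache-Control"] = "public, max-age=10";
        await context.Response.WriteAsync("Hello, the time is " + DateTime.UtcNow);
    });
}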

Response Compression Middleware

You can now add GZipCompression to the ASP.NET HTTP Pipeline if you would like ASP.NET to do your compression instead of a front-end web server.  This middleware is available in the Microsoft.AspNetCore.ResponseCompression package.  You can add simple GZipCompression using the fastest compression level with the following syntax in your Startup.cs class:
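A minimal sketch; by default the middleware uses gzip at the fastest compression level, which matches what the paragraph above describes:

public void ConfigureServices(IServiceCollection services)
{
    // Registers the compression services; gzip at CompressionLevel.Fastest is the default
    services.AddResponseCompression();
}

public void Configure(IApplicationBuilder app)
{
    // Add compression before any middleware that writes the response
    app.UseResponseCompression();

    app.UseMvc();
}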

There are other options available for configuring compression, including the ability to specify custom compression providers.

WebListener Server for Windows

WebListener is a server that runs directly on top of the Windows Http Server API. WebListener gives you the option to take advantage of Windows-specific features, like support for Windows authentication, port sharing, HTTPS with SNI, HTTP/2 over TLS (Windows 10), direct file transmission, response caching, and WebSockets (Windows 8). On Windows you can use this server instead of Kestrel by referencing the Microsoft.AspNetCore.Server.WebListener package instead of the Kestrel package and configuring your WebHostBuilder to use WebListener instead of Kestrel:
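A minimal Program.cs sketch of this swap (UseWebListener also has an overload accepting options for the Windows-specific features listed above):

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseWebListener()             // instead of .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

    host.Run();
}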

You can find other samples demonstrating the use of WebListener in its GitHub repository.

Unlike the other packages that are part of this release, WebListener is being shipped as both 1.0.0 and 1.1.0-preview. The 1.0.0 version of the package can be used in production LTS (1.0.1) ASP.NET Core applications. The 1.1.0-preview version of the package is a pre-release of the next version of WebListener as part of the 1.1.0 release.

View Components as Tag Helpers

You can now invoke View Components from your views using Tag Helper syntax and get all the benefits of IntelliSense and Tag Helper tooling in Visual Studio. Previously, to invoke a View Component from a view you would use the Component.InvokeAsync method and pass in any View Component arguments using an anonymous object:

@await Component.InvokeAsync("Copyright", new { website = "example.com", year = 2016 })

Instead, you can now invoke a View Component like you would any Tag Helper while getting IntelliSense for the View Component parameters:

TagHelper in Visual Studio
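The markup shown in the screenshot uses a vc: prefix with kebab-cased View Component and parameter names; for the Copyright component above, it would look roughly like this:

<vc:copyright website="example.com" year="2016"></vc:copyright>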

To enable invoking your View Components as Tag Helpers, simply register your View Components as Tag Helpers using the @addTagHelper directive:

@addTagHelper "*, WebApplication1"

Middleware as MVC filters

Middleware typically sits in the global request handling pipeline. But what if you want to apply middleware to only a specific controller or action? You can now apply middleware as an MVC resource filter using the new MiddlewareFilterAttribute.  For example, you could apply response compression or caching to a specific action, or you might use a route value based request culture provider to establish the current culture for the request using the localization middleware.

To use middleware as a filter you first create a type with a Configure method that specifies the middleware pipeline that you want to use:
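A minimal sketch, reusing the response compression middleware from earlier in this post (the pipeline class name is arbitrary, and the corresponding services still need to be registered in ConfigureServices):

public class ResponseCompressionPipeline
{
    public void Configure(IApplicationBuilder applicationBuilder)
    {
        applicationBuilder.UseResponseCompression();
    }
}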

You then apply that middleware pipeline to a controller, an action or globally using the MiddlewareFilterAttribute:
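For example, applied at the controller level (HomeController is a placeholder):

[MiddlewareFilter(typeof(ResponseCompressionPipeline))]
public class HomeController : Controller
{
    // Every action on this controller now runs through the pipeline defined above
    public IActionResult Index() => View();
}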

Cookie-based TempData provider

As an alternative to using Session state for storing TempData you can now use the new cookie-based TempData provider. The cookie-based TempData provider will persist all TempData in a cookie and remove the need to manage any server-side session state.

To use the cookie-based TempData provider you register the CookieTempDataProvider service in your ConfigureServices method after adding the MVC services as follows:

services.AddMvc();
services.AddSingleton<ITempDataProvider, CookieTempDataProvider>();

View compilation

While Razor syntax for views provides a flexible development experience that doesn’t require a compiler, there are some scenarios where you do not want the Razor syntax interpreted at runtime.  You can now precompile the Razor views that your application references and deploy them with your application.  You can add the view compiler to your application in the “tools” section of project.json with the package reference “Microsoft.AspNetCore.Mvc.Razor.Precompilation.Tools”.  After running a package restore, you can then execute “dotnet razor-precompile” to precompile the Razor views in your application.
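A sketch of the project.json wiring, assuming the preview 1 package version string:

"tools": {
  "Microsoft.AspNetCore.Mvc.Razor.Precompilation.Tools": "1.1.0-preview1-final"
}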

Azure App Service logging provider

The Microsoft.AspNetCore.AzureAppServicesIntegration package allows your application to take advantage of App Service specific logging and diagnostics. Any log messages that are written using the ILogger/ILoggerFactory abstractions will go to the locations configured in the Diagnostics Logs section of your App Service configuration in the portal (see screenshot).

Usage:

Add a reference to the Microsoft.AspNetCore.AzureAppServicesIntegration package and call the UseAzureAppServices method in your Program.cs.
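A minimal Program.cs sketch; this mirrors the snippet shown in the 1.1 RTM announcement later in this archive:

var host = new WebHostBuilder()
    .UseKestrel()
    .UseAzureAppServices()   // wires up App Service diagnostics logging
    .UseStartup<Startup>()
    .Build();

host.Run();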

NOTE: UseIISIntegration is not in the above example because UseAzureAppServices includes it for you. It shouldn’t hurt your application if you have both calls, but explicitly calling UseIISIntegration is not required.

Once you have added the UseAzureAppServices method then your application will honor the settings in the Diagnostics Logs section of the Azure App Service settings as shown below. If you change these settings, switching from file system to blob storage logs for example, your application will automatically switch to logging to the new location without you redeploying.

Azure Key Vault configuration provider

The Microsoft.Extensions.Configuration.AzureKeyVault package provides a configuration provider for Azure Key Vault. This allows you to retrieve configuration from Key Vault secrets on application start and hold it in memory, using the normal ASP.NET Core configuration abstractions to access the configuration data.

Basic usage of the provider is done like this:
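The original post showed this as an image; here is a sketch based on the linked sample below, assuming the vault name and Azure AD client credentials are available under the configuration keys Vault, ClientId, and ClientSecret:

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json");

var config = builder.Build();

// Layer Key Vault secrets on top of the existing configuration
builder.AddAzureKeyVault(
    $"https://{config["Vault"]}.vault.azure.net/",
    config["ClientId"],
    config["ClientSecret"]);

config = builder.Build();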

For an example on how to add the Key Vault configuration provider see the sample here: https://github.com/aspnet/Configuration/tree/dev/samples/KeyVaultSample

Redis and Azure Storage Data Protection Key Repositories

The Microsoft.AspNetCore.DataProtection.AzureStorage and Microsoft.AspNetCore.DataProtection.Redis packages allow storing your Data Protection keys in Azure Storage or Redis respectively. This allows keys to be shared across several instances of a website so that you can, for example, share an authentication cookie or CSRF protection across many load balanced servers running your ASP.NET Core application. As data protection is used behind the scenes for a few things in MVC, it’s extremely probable that once you start scaling out you will need to share the keyring. Before these two packages, your option for sharing keys was to use a network share with a file-based key repository.

Examples:

Azure:

services.AddDataProtection()
  .PersistKeysToAzureBlobStorage(new Uri("<blob URI including SAS token>"));

Redis:

// Connect
var redis = ConnectionMultiplexer.Connect("localhost:6379");

// Configure
services.AddDataProtection()
  .PersistKeysToRedis(redis, "DataProtection-Keys");

NOTE: When using a non-persistent Redis instance, anything that is encrypted using Data Protection will not be able to be decrypted once the instance resets. For the default authentication flows this would usually just mean that users are redirected to log in again. However, for anything manually encrypted with Data Protection’s Protect method, you will not be able to decrypt the data at all. For this reason, you should not use a Redis instance that isn’t persistent when manually using the Protect method of Data Protection. Data Protection is optimized for ephemeral data.

Summary

Thank you for trying out ASP.NET Core 1.1 Preview 1! If you have any problems, we will be monitoring the GitHub issues for the ASP.NET Core repositories. We hope you enjoy these new features and improvements!


Bearer Token Authentication in ASP.NET Core

$
0
0

This is a guest post from Mike Rousos

Introduction

ASP.NET Core Identity automatically supports cookie authentication. It is also straightforward to support authentication by external providers using the Google, Facebook, or Twitter ASP.NET Core authentication packages. One authentication scenario that requires a little bit more work, though, is to authenticate via bearer tokens. I recently worked with a customer who was interested in using JWT bearer tokens for authentication in mobile apps that worked with an ASP.NET Core back-end. Because some of their customers don’t have reliable internet connections, they also wanted to be able to validate the tokens without having to communicate with the issuing server.

In this article, I offer a quick look at how to issue JWT bearer tokens in ASP.NET Core. In subsequent posts, I’ll show how those same tokens can be used for authentication and authorization (even without access to the authentication server or the identity data store).

Offline Token Validation Considerations

First, here’s a quick diagram of the desired architecture.

Authentication architecture diagram

The customer has a local server with business information which will need to be accessed and updated periodically by client devices. Rather than store user names and hashed passwords locally, the customer prefers to use a common authentication micro-service which is hosted in Azure and used in many scenarios beyond just this specific one. This particular scenario is interesting, though, because the connection between the customer’s location (where the server and clients reside) and the internet is not reliable. Therefore, they would like a user to be able to authenticate at some point in the morning when the connection is up and have a token that will be valid throughout that user’s work shift. The local server, therefore, needs to be able to validate the token without access to the Azure authentication service.

This local validation is easily accomplished with JWT tokens. A JWT token typically contains a body with information about the authenticated user (subject identifier, claims, etc.), the issuer of the token, the audience (recipient) the token is intended for, and an expiration time (after which the token is invalid). The token also contains a cryptographic signature as detailed in RFC 7518. This signature is generated by a private key known only to the authentication server, but can be validated by anyone in possession of the corresponding public key. One JWT validation work flow (used by AD and some identity providers) involves requesting the public key from the issuing server and using it to validate the token’s signature. In our offline scenario, though, the local server can be prepared with the necessary public key ahead of time. The challenge with this architecture is that the local server will need to be given an updated public key anytime the private key used by the cloud service changes, but this inconvenience means that no internet connection is needed at the time the JWT tokens are validated.

Issuing Authentication Tokens

The Microsoft.AspNetCore.* libraries don’t have built-in support for issuing JWT tokens. There are, however, several other good options available.

First, Azure Active Directory Authentication provides identity and authentication as a service. Using Azure AD is a quick way to get identity in an ASP.NET Core app without having to write authentication server code.

Alternatively, if a developer wishes to write the authentication service themselves, there are a couple of third-party libraries available to handle this scenario. IdentityServer4 is a flexible OpenID Connect framework for ASP.NET Core. Another good option is OpenIddict. Like IdentityServer4, OpenIddict offers OpenID Connect server functionality for ASP.NET Core. Both OpenIddict and IdentityServer4 work well with ASP.NET Identity 3.

For this demo, I will use OpenIddict. There is excellent documentation on accomplishing the same tasks with IdentityServer4 available in the IdentityServer4 documentation, which I would encourage you to take a look at, as well.

A Disclaimer

Please note that both IdentityServer4 and OpenIddict are pre-release packages currently. OpenIddict is currently released as a beta and IdentityServer4 as an RC, so both are still in development and subject to change!

Setup the User Store

In this scenario, we will use a common ASP.NET Identity 3-based user store, accessed via Entity Framework Core. Because this is a common scenario, setting it up is as easy as creating a new ASP.NET Core web app from the new project templates and selecting ‘individual user accounts’ for the authentication mode.

New web application dialog

This template will provide a default ApplicationUser type and Entity Framework Core connections to manage users. The connection string in appsettings.json can be modified to point at the database where you want this data stored.

Because JWT tokens can encapsulate claims, it’s interesting to include some claims for users other than just the defaults of user name or email address. For demo purposes, let’s include two different types of claims.

Adding Roles

ASP.NET Identity 3 includes the concept of roles. To take advantage of this, we need to create some roles which users can be assigned to. In a real application, this would likely be done by managing roles through a web interface. For this short sample, though, I just seeded the database with sample roles by adding this code to Startup.cs:

// Initialize some test roles. In the real world, these would be setup explicitly by a role manager
private string[] roles = new[] { "User", "Manager", "Administrator" };
private async Task InitializeRoles(RoleManager<IdentityRole> roleManager)
{
    foreach (var role in roles)
    {
        if (!await roleManager.RoleExistsAsync(role))
        {
            var newRole = new IdentityRole(role);
            await roleManager.CreateAsync(newRole);
            // In the real world, there might be claims associated with roles
            // _roleManager.AddClaimAsync(newRole, new )
        }
    }
}

I then call InitializeRoles from my app’s Startup.Configure method. The RoleManager needed as a parameter to InitializeRoles can be provided via dependency injection (just add a RoleManager<IdentityRole> parameter to your Startup.Configure method).

Because roles are already part of ASP.NET Identity, there’s no need to modify models or our database schema.

Adding Custom Claims to the Data Model

It’s also possible to encode completely custom claims in JWT tokens. To demonstrate that, I added an extra property to my ApplicationUser type. For sample purposes, I added an integer called OfficeNumber:

public virtual int OfficeNumber { get; set; }

This is not something that would likely be a useful claim in the real world, but I added it in my sample specifically because it’s not the sort of claim that’s already handled by any of the frameworks we’re using.

I also updated the view models and controllers associated with creating a new user to allow specifying role and office number when creating new users.

I added the following properties to the RegisterViewModel type:

[Display(Name = "Is administrator")]
public bool IsAdministrator { get; set; }

[Display(Name = "Is manager")]
public bool IsManager { get; set; }

[Required]
[Display(Name = "Office Number")]
public int OfficeNumber { get; set; }

I also added cshtml for gathering this information to the registration view:

<div class="form-group">
    <label asp-for="OfficeNumber" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="OfficeNumber" class="form-control" />
        <span asp-validation-for="OfficeNumber" class="text-danger"></span>
    </div>
</div>
<div class="form-group">
    <label asp-for="IsManager" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="IsManager" class="form-control" />
        <span asp-validation-for="IsManager" class="text-danger"></span>
    </div>
</div>
<div class="form-group">
    <label asp-for="IsAdministrator" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="IsAdministrator" class="form-control" />
        <span asp-validation-for="IsAdministrator" class="text-danger"></span>
    </div>
</div>

Finally, I updated the AccountController.Register action to set role and office number information when creating users in the database. Notice that we add a custom claim for the office number. This takes advantage of ASP.NET Identity’s custom claim tracking. Be aware that ASP.NET Identity doesn’t store claim value types, so even in cases where the claim is always an integer (as in this example), it will be stored and returned as a string. Later in this post, I explain how non-string claims can be included in JWT tokens.

var user = new ApplicationUser { UserName = model.Email, Email = model.Email, OfficeNumber = model.OfficeNumber };
var result = await _userManager.CreateAsync(user, model.Password);
if (result.Succeeded)
{
    if (model.IsAdministrator)
    {
        await _userManager.AddToRoleAsync(user, "Administrator");
    }
    else if (model.IsManager)
    {
        await _userManager.AddToRoleAsync(user, "Manager");
    }

    var officeClaim = new Claim("office", user.OfficeNumber.ToString(), ClaimValueTypes.Integer);
    await _userManager.AddClaimAsync(user, officeClaim);
    ...

Updating the Database Schema

After making these changes, we can use Entity Framework’s migration tooling to easily update the database to match (the only change to the database should be to add an OfficeNumber column to the users table). To migrate, simply run dotnet ef migrations add OfficeNumberMigration and dotnet ef database update from the command line.

At this point, the authentication server should allow registering new users. If you’re following along in code, go ahead and add some sample users at this point.

Issuing Tokens with OpenIddict

The OpenIddict package is still pre-release, so it’s not yet available on NuGet.org. Instead, the package is available on the aspnet-contrib MyGet feed.

To restore it, we need to add that feed to our solution’s NuGet.config. If you don’t yet have a NuGet.config file in your solution, you can add one that looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="aspnet-contrib" value="https://www.myget.org/F/aspnet-contrib/api/v3/index.json" />
  </packageSources>
</configuration>

Once that’s done, add a reference to "OpenIddict": "1.0.0-beta1-*" and "OpenIddict.Mvc": "1.0.0-beta1-*" in your project.json file’s dependencies section. OpenIddict.Mvc contains some helpful extensions that allow OpenIddict to automatically bind OpenID Connect requests to MVC action parameters.

There are only a few steps needed to enable OpenIddict endpoints.

Use OpenIddict Model Types

The first change is to update your ApplicationDbContext model type to inherit from OpenIddictDbContext instead of IdentityDbContext.

After making this change, migrate the database to update it, as well (dotnet ef migrations add OpenIddictMigration and dotnet ef database update).

Configure OpenIddict

Next, it’s necessary to register OpenIddict types in our ConfigureServices method in our Startup type. This can be done with a call like this:

services.AddOpenIddict<ApplicationDbContext>()
    .AddMvcBinders()
    .EnableTokenEndpoint("/connect/token")
    .UseJsonWebTokens()
    .AllowPasswordFlow()
    .AddSigningCertificate(jwtSigningCert);

The specific methods called on the OpenIddictBuilder here are important to understand.

  • AddMvcBinders. This method registers custom model binders that will populate OpenIdConnectRequest parameters in MVC actions with OpenID Connect requests read from the incoming HTTP request’s context. This isn’t required, since the OpenID Connect requests can be read manually, but it’s a helpful convenience.
  • EnableTokenEndpoint. This method allows you to specify the endpoint which will be serving authentication tokens. The endpoint shown above (/connect/token) is a pretty common default endpoint for token issuance. OpenIddict needs to know the location of this endpoint so that it can be included in responses when a client queries for information about how to connect (using the .well-known/openid-configuration endpoint, which OpenIddict automatically provides). OpenIddict will also validate requests to this endpoint to be sure they are valid OpenID Connect requests. If a request is invalid (if it’s missing mandatory parameters like grant_type, for example), then OpenIddict will reject the request before it even reaches the app’s controllers.
  • UseJsonWebTokens. This instructs OpenIddict to use JWT as the format for bearer tokens it produces.
  • AllowPasswordFlow. This enables the password grant type when logging on a user. The different OpenID Connect authorization flows are documented in RFC and OpenID Connect specs. The password flow means that client authorization is performed based on user credentials (name and password) which are provided from the client. This is the flow that best matches our sample scenario.
  • AddSigningCertificate. This API specifies the certificate which should be used to sign JWT tokens. In my sample code, I produce the jwtSigningCert argument from a pfx file on disk (var jwtSigningCert = new X509Certificate2(certLocation, certPassword);). In a real-world scenario, the certificate would more likely be loaded from the authentication server’s certificate store, in which case a different overload of AddSigningCertificate would be used (one which takes the cert’s thumbprint and store name/location).
    • If you need a self-signed certificate for testing purposes, one can be produced with the makecert and pvk2pfx command line tools (which should be on the path in a Visual Studio Developer Command prompt).
      • makecert -n "CN=AuthSample" -a sha256 -sv AuthSample.pvk -r AuthSample.cer creates a new self-signed test certificate with its public key in AuthSample.cer and its private key in AuthSample.pvk.
      • pvk2pfx -pvk AuthSample.pvk -spc AuthSample.cer -pfx AuthSample.pfx -pi [A password] combines the pvk and cer files into a single pfx file containing both the public and private keys for the certificate (protected by a password).
      • This pfx file is what needs to be loaded by OpenIddict (since the private key is necessary to sign tokens). Note that this private key (and any files containing it) must be kept secure.
  • DisableHttpsRequirement. The code snippet above doesn’t include a call to DisableHttpsRequirement(), but such a call may be useful during testing to disable the requirement that authentication calls be made over HTTPS. Of course, this should never be used outside of testing as it would allow authentication tokens to be observed in transit and, therefore, enable malicious parties to impersonate legitimate users.

Enable OpenIddict Endpoints

Once AddOpenIddict has been used to configure OpenIddict services, a call to app.UseOpenIddict(); (which should come after the existing call to UseIdentity) should be added to Startup.Configure to actually enable OpenIddict in the app’s HTTP request processing pipeline.

Implementing the Connect/Token Endpoint

The final step necessary to enable the authentication server is to implement the connect/token endpoint. The EnableTokenEndpoint call made during OpenIddict configuration indicates where the token-issuing endpoint will be (and allows OpenIddict to validate incoming OIDC requests), but the endpoint still needs to be implemented.

OpenIddict’s owner, Kévin Chalet, gives a good example of how to implement a token endpoint supporting a password flow in this sample. I’ve restated the gist of how to create a simple token endpoint here.

First, create a new controller called ConnectController and give it a Token post action. Of course, the specific names are not important, but it is important that the route matches the one given to EnableTokenEndpoint.

Give the action method an OpenIdConnectRequest parameter. Because we are using the OpenIddict MVC binder, this parameter will be supplied by OpenIddict. Alternatively (without using the OpenIddict model binder), the GetOpenIdConnectRequest extension method could be used to retrieve the OpenID Connect request.

Based on the contents of the request, you should confirm that it is valid:

  1. Confirm that the grant type is as expected (‘password’ for this authentication server).
  2. Confirm that the requested user exists (using the ASP.NET Identity UserManager).
  3. Confirm that the requested user is able to sign in (since ASP.NET Identity allows for accounts that are locked or not yet confirmed).
  4. Confirm that the password provided is correct (again, using a UserManager).

If everything in the request checks out, then a ClaimsPrincipal can be created using SignInManager.CreateUserPrincipalAsync.

Roles and custom claims known to ASP.NET Identity will automatically be present in the ClaimsPrincipal. If any changes are needed to the claims, those can be made now.

One set of claims updates that will be important is to attach destinations to claims. A claim is only included in a token if that claim includes a destination for that token type. So, even though the ClaimsPrincipal will contain all ASP.NET Identity claims, they will only be included in tokens if they have appropriate destinations. This allows some claims to be kept private and others to be included only in particular token types (access or identity tokens) or if particular scopes are requested. For the purposes of this simple demo, I am including all claims for all token types.

This is also an opportunity to add additional custom claims to the ClaimsPrincipal. Typically, tracking the claims with ASP.NET Identity is sufficient but, as mentioned earlier, ASP.NET Identity does not remember claim value types. So, if it was important that the office claim be an integer (rather than a string), we could instead add it here based on data in the ApplicationUser object returned from the UserManager. Claims cannot be added to a ClaimsPrincipal directly, but the underlying identity can be retrieved and modified. For example, if the office claim was created here (instead of at user registration), it could be added like this:

var identity = (ClaimsIdentity)principal.Identity;
var officeClaim = new Claim("office", user.OfficeNumber.ToString(), ClaimValueTypes.Integer);
officeClaim.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken, OpenIdConnectConstants.Destinations.IdentityToken);
identity.AddClaim(officeClaim);

Finally, an AuthenticationTicket can be created from the claims principal and used to sign in the user. The ticket object allows us to use helpful OpenID Connect extension methods to specify scopes and resources to be granted access. In my sample, I pass the requested scopes filtered by those the server is able to provide. For resources, I provide a hard-coded string indicating the resource this token should be used to access. In more complex scenarios, the requested resources (request.GetResources()) might be considered when determining which resource claims to include in the ticket. Note that resources (which map to the audience element of a JWT) are not mandatory according to the JWT specification, though many JWT consumers expect them.

Put all together, here’s a simple implementation of a connect/token endpoint:

[HttpPost]
public async Task<IActionResult> Token(OpenIdConnectRequest request)
{
    if (!request.IsPasswordGrantType())
    {
        // Return bad request if the request is not for password grant type
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.UnsupportedGrantType,
            ErrorDescription = "The specified grant type is not supported."
        });
    }

    var user = await _userManager.FindByNameAsync(request.Username);
    if (user == null)
    {
        // Return bad request if the user doesn't exist
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.InvalidGrant,
            ErrorDescription = "Invalid username or password"
        });
    }

    // Check that the user can sign in and is not locked out.
    // If two-factor authentication is supported, it would also be appropriate to check that 2FA is enabled for the user
    if (!await _signInManager.CanSignInAsync(user) || (_userManager.SupportsUserLockout && await _userManager.IsLockedOutAsync(user)))
    {
        // Return bad request if the user can't sign in
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.InvalidGrant,
            ErrorDescription = "The specified user cannot sign in."
        });
    }

    if (!await _userManager.CheckPasswordAsync(user, request.Password))
    {
        // Return bad request if the password is invalid
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.InvalidGrant,
            ErrorDescription = "Invalid username or password"
        });
    }

    // The user is now validated, so reset lockout counts, if necessary
    if (_userManager.SupportsUserLockout)
    {
        await _userManager.ResetAccessFailedCountAsync(user);
    }

    // Create the principal
    var principal = await _signInManager.CreateUserPrincipalAsync(user);

    // Claims will not be associated with specific destinations by default, so we must indicate whether they should
    // be included or not in access and identity tokens.
    foreach (var claim in principal.Claims)
    {
        // For this sample, just include all claims in all token types.
        // In reality, claims' destinations would probably differ by token type and depending on the scopes requested.
        claim.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken, OpenIdConnectConstants.Destinations.IdentityToken);
    }

    // Create a new authentication ticket for the user's principal
    var ticket = new AuthenticationTicket(
        principal,
        new AuthenticationProperties(),
        OpenIdConnectServerDefaults.AuthenticationScheme);

    // Include resources and scopes, as appropriate
    var scope = new[]
    {
        OpenIdConnectConstants.Scopes.OpenId,
        OpenIdConnectConstants.Scopes.Email,
        OpenIdConnectConstants.Scopes.Profile,
        OpenIdConnectConstants.Scopes.OfflineAccess,
        OpenIddictConstants.Scopes.Roles
    }.Intersect(request.GetScopes());

    ticket.SetResources("http://localhost:5000/");
    ticket.SetScopes(scope);

    // Sign in the user
    return SignIn(ticket.Principal, ticket.Properties, ticket.AuthenticationScheme);
}

Testing the Authentication Server

At this point, our simple authentication server is done and should work to issue JWT bearer tokens for the users in our database.

OpenIddict implements OpenID Connect, so our sample should support a standard /.well-known/openid-configuration endpoint with information about how to authenticate with the server.

If you’ve followed along building the sample, launch the app and navigate to that endpoint. You should get a JSON response similar to this:

{
  "issuer": "http://localhost:5000/",
  "jwks_uri": "http://localhost:5000/.well-known/jwks",
  "token_endpoint": "http://localhost:5000/connect/token",
  "code_challenge_methods_supported": [ "S256" ],
  "grant_types_supported": [ "password" ],
  "subject_types_supported": [ "public" ],
  "scopes_supported": [ "openid", "profile", "email", "phone", "roles" ],
  "id_token_signing_alg_values_supported": [ "RS256" ]
}

This gives clients information about our authentication server. Some of the interesting values include:

  • The jwks_uri property is the endpoint that clients can use to retrieve public keys for validating token signatures from the issuer.
  • token_endpoint gives the endpoint that should be used for authentication requests.
  • The grant_types_supported property is a list of the grant types supported by the server. In the case of this sample, that is only password.
  • scopes_supported is a list of the scopes that a client can request access to.

If you’d like to check that the correct certificate is being used, you can navigate to the jwks_uri endpoint to see the public keys used by the server. The x5t property of the response should be the certificate thumbprint. You can check this against the thumbprint of the certificate you expect to be using to confirm that they’re the same.

Finally, we can test the authentication server by attempting to login! This is done via a POST to the token_endpoint. You can use a tool like Postman to put together a test request. The address for the post should be the token_endpoint URI and the body of the post should be x-www-form-urlencoded and include the following items:

  • grant_type must be ‘password’ for this scenario.
  • username should be the username to login.
  • password should be the user’s password.
  • scope should be the scopes that access is desired for.
  • resource is an optional parameter which can specify the resource the token is meant to access. Using this can help to make sure that a token issued to access one resource isn’t reused to access a different one.

Here are the complete request and response from me testing the connect/token API:

Request

POST /connect/token HTTP/1.1
Host: localhost:5000
Cache-Control: no-cache
Postman-Token: f1bb8681-a963-2282-bc94-03fdaea5da78
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=Mike%40Fabrikam.com&password=MikePassword1!&scope=openid+email+name+profile+roles

Response

{
  "token_type": "Bearer",
  "access_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IkU1N0RBRTRBMzU5NDhGODhBQTg2NThFQkExMUZFOUIxMkI5Qzk5NjIiLCJ0eXAiOiJKV1QifQ.eyJ1bmlxdWVfbmFtZSI6Ik1pa2VAQ29udG9zby5jb20iLCJBc3BOZXQuSWRlbnRpdHkuU2VjdXJpdHlTdGFtcCI6ImMzM2U4NzQ5LTEyODAtNGQ5OS05OTMxLTI1Mzk1MzY3NDEzMiIsInJvbGUiOiJBZG1pbmlzdHJhdG9yIiwib2ZmaWNlIjoiMzAwIiwianRpIjoiY2UwOWVlMGUtNWQxMi00NmUyLWJhZGUtMjUyYTZhMGY3YTBlIiwidXNhZ2UiOiJhY2Nlc3NfdG9rZW4iLCJzY29wZSI6WyJlbWFpbCIsInByb2ZpbGUiLCJyb2xlcyJdLCJzdWIiOiJjMDM1YmU5OS0yMjQ3LTQ3NjktOWRjZC01NGJkYWRlZWY2MDEiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwMDEvIiwibmJmIjoxNDc2OTk3MDI5LCJleHAiOjE0NzY5OTg4MjksImlhdCI6MTQ3Njk5NzAyOSwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo1MDAwLyJ9.q-c6Ld1b7c77td8B-0LcppUbL4a8JvObiru4FDQWrJ_DZ4_zKn6_0ud7BSijj4CV3d3tseEM-3WHgxjjz0e8aa4Axm55y4Utf6kkjGjuYyen7bl9TpeObnG81ied9NFJTy5HGYW4ysq4DkB2IEOFu4031rcQsUonM1chtz14mr3wWHohCi7NJY0APVPnCoc6ae4bivqxcYxbXlTN4p6bfBQhr71kZzP0AU_BlGHJ1N8k4GpijHVz2lT-2ahYaVSvaWtqjlqLfM_8uphNH3V7T7smaMpomQvA6u-CTZNJOZKalx99GNL4JwGk13MlikdaMFXhcPiamhnKtfQEsoNauA",
  "expires_in": 1800
}

The access_token is the JWT and is nothing more than a base64-encoded string in three parts ([header].[body].[signature]). A number of websites offer JWT decoding functionality.

The access token above has these contents:

{
  "alg": "RS256",
  "kid": "E57DAE4A35948F88AA8658EBA11FE9B12B9C9962",
  "typ": "JWT"
}.
{
  "unique_name": "Mike@Contoso.com",
  "AspNet.Identity.SecurityStamp": "c33e8749-1280-4d99-9931-253953674132",
  "role": "Administrator",
  "office": "300",
  "jti": "ce09ee0e-5d12-46e2-bade-252a6a0f7a0e",
  "usage": "access_token",
  "scope": [
    "email",
    "profile",
    "roles"
  ],
  "sub": "c035be99-2247-4769-9dcd-54bdadeef601",
  "aud": "http://localhost:5001/",
  "nbf": 1476997029,
  "exp": 1476998829,
  "iat": 1476997029,
  "iss": "http://localhost:5000/"
}.
[signature]

Important fields in the token include:

  • kid is the key ID that can be used to look up the key needed to validate the token’s signature.
    • x5t, similarly, is the signing certificate’s thumbprint.
  • role and office capture our custom claims.
  • exp is a timestamp for when the token should expire and no longer be considered valid.
  • iss is the issuing server’s address.

These fields can be used to validate the token.

Conclusion and Next Steps

Hopefully this article has provided a useful overview of how ASP.NET Core apps can issue JWT bearer tokens. The in-box abilities to authenticate with cookies or third-party social providers are sufficient for many scenarios, but in other cases (especially when supporting mobile clients), bearer authentication is more convenient.

Look for a follow-up to this post coming soon covering how to validate the token in ASP.NET Core so that it can be used to authenticate and sign in a user automatically. And in keeping with the original scenario I ran into with a customer, we’ll make sure the validation can all be done without access to the authentication server or identity database.


Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM


We are happy to announce that ASP.NET Core 1.1 is now available as a stable release on nuget.org! This release includes a bunch of great new features along with many bug fixes and general enhancements. We invite you to try out the new features and to provide feedback.

To update an existing project to ASP.NET Core 1.1 you will need to do the following:

  1. Download and install the .NET Core 1.1 SDK
  2. If your application is referencing the .NET Core framework, you should update the references in your project.json file from netcoreapp1.0 and Microsoft.NETCore.App version 1.0 to version 1.1. In the default project.json file for an ASP.NET Core project running on the .NET Core framework, these two updates are located as follows:

    Two places to update project.json to .NET Core 1.1

  3. Update your ASP.NET Core packages dependencies to use the new 1.1.0 versions. You can do this by navigating to the NuGet package manager window and inspecting the “Updates” tab for the list of packages that you can update.

    Package list in NuGet package manager UI in Visual Studio

    Updating Packages using the NuGet package manager UI with the last pre-release build of ASP.NET Core 1.1

Side by side install

NOTE: By installing the new SDK, you will update the default behavior of the dotnet command. It will use MSBuild and process csproj projects instead of project.json. Similarly, dotnet new will create a csproj project file.

In order to continue using the earlier project.json-based tools on a per-project basis, create a global.json file in your project directory and add the “sdk” property to it. The following example shows a global.json that constrains dotnet to use the project.json-based tools:


{
    "sdk": {
        "version": "1.0.0-preview2-003131"
    }
}

Performance

We are very pleased to announce the participation of ASP.NET Core with the Kestrel web server in the Round 13 TechEmpower benchmarks.  The TechEmpower standard benchmarks are known for their thorough testing of the many web frameworks that are available.  In the latest results from TechEmpower, ASP.NET Core 1.1 with Kestrel was ranked as the fastest mainstream full-stack web framework in the plaintext test.

TechEmpower also reports that the performance of ASP.NET Core running on Linux is approximately 760 times faster than it was one year ago.  Since TechEmpower started measuring benchmarks in March 2013, they have never seen such a performance improvement as they have observed in ASP.NET Core over the last year.

You can read more about the TechEmpower benchmarks and their latest results on the TechEmpower website.

New Web Development Features in Visual Studio 2017

Visual Studio 2017 RC includes many new web development features, including:

  • The new JavaScript editor
  • Embedded ESLint capabilities that help check for common mistakes in your script
  • JavaScript debugging support for browsers
  • Updated BrowserLink features for two-way communication between your browsers and Visual Studio while debugging

Support for ASP.NET Core in Visual Studio for Mac

We’re pleased to announce the first preview of ASP.NET Core tooling in Visual Studio for Mac.  For those familiar with Visual Studio, you’ll find many of the same capabilities you’d expect from a Visual Studio development environment.  IntelliSense and refactoring capabilities are built on top of Roslyn, and it shares much of Visual Studio’s .NET Core debugger.

This first preview focuses on developing Web API applications.  You’ll have a great experience creating a new project and working with C# files, along with TextMate bundle support for web file types (e.g. HTML, JavaScript, JSON, .cshtml). In a future update, we’ll be adding the same first-class support for these file types that we have in Visual Studio, which will bring IntelliSense for all of them as well.

What’s new in ASP.NET Core 1.1?

This release was designed around the following feature themes in order to help developers:

  • Improved and cross-platform compatible site hosting capabilities when using a host other than Internet Information Services (IIS).
  • Support for developing with native Windows capabilities
  • Compatibility, portability and performance of middleware and other MVC features throughout the UI framework
  • Improved deployment and management experience of ASP.NET Core applications on Microsoft Azure. We think these improvements help make ASP.NET Core the best choice for developing an application for the cloud.

For additional details on the changes included in this release please check out the release notes.

URL Rewriting Middleware

We are bringing URL rewriting functionality to ASP.NET Core through a middleware component that can be configured using IIS standard XML formatted rules, Apache Mod_Rewrite syntax, or some simple C# methods coded into your application.  When you want to run your ASP.NET Core application outside of IIS, we want to enable those same rich URL rewriting capabilities regardless of the web host you are using.  If you are using containers, Apache, or nginx, you will be able to have ASP.NET Core manage this capability for you with a uniform syntax that you are familiar with.

URL rewriting allows mapping a public URL space, designed for consumption by your clients, to whatever representation the downstream components of your middleware pipeline require, as well as redirecting clients to different URLs based on a pattern.

For example, you could ensure a canonical hostname by rewriting any requests to http://example.com to instead be http://www.example.com for everything after the re-write rules have run. Another example is to redirect all requests to http://example.com to https://example.com. You can even configure URL rewrite such that both rules are applied and all requests to example.com are always redirected to SSL and rewritten to www.

We can get started with this middleware by adding a reference to our web application for the Microsoft.AspNetCore.Rewrite package.  This allows us to add a call to configure RewriteOptions in our Startup.Configure method for our rewriter:
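As in the preview announcement, the original snippet here was a screenshot. A minimal sketch, assuming rules files named ApacheModRewrite.txt and UrlRewrite.xml in the content root and an injected IHostingEnvironment named env:

app.UseRewriter(new RewriteOptions()
    // Redirect to the canonical SSL www host with an HTTP 301
    .AddRedirect("^(.*)$", "https://www.example.com/$1", statusCode: 301)
    // Rules can also be loaded from Apache mod_rewrite or IIS-format files
    .AddApacheModRewrite(env.ContentRootFileProvider, "ApacheModRewrite.txt")
    .AddIISUrlRewrite(env.ContentRootFileProvider, "UrlRewrite.xml"));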

As you can see, we can both force a rewrite and redirect with different rules.

  • Url Redirect sends an HTTP 301 Moved Permanently status code to the client with the new address
  • Url Rewrite gives a different URL to the next steps in the HTTP pipeline, tricking it into thinking a different address was requested.

Response Caching Middleware

Response caching, similar to the OutputCache capabilities of previous ASP.NET releases, can now be activated in your application by adding the Microsoft.AspNetCore.ResponseCaching and the Microsoft.Extensions.Caching.Memory packages to your application.  You can add this middleware to your application in the Startup.ConfigureServices method and configure the response caching from the Startup.Configure method.  For a sample implementation, check out the demo in the ResponseCaching repository.

Response Compression Middleware

You can now add GZipCompression to the ASP.NET HTTP Pipeline if you would like ASP.NET to do your compression instead of a front-end web server.  IIS would normally have handled this for you, but in environments where your host does not provide compression capabilities, ASP.NET Core can do this for you.  We think this is a great practice that everyone should use in their server-side applications to deliver smaller data that transmits faster over the network.

This middleware is available in the Microsoft.AspNetCore.ResponseCompression package.  You can add simple GZipCompression using the fastest compression level with the following syntax in your Startup.cs class:
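A minimal sketch (gzip at the fastest compression level is the default):

public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression();
}

public void Configure(IApplicationBuilder app)
{
    // Add compression before middleware that writes the response
    app.UseResponseCompression();
    app.UseMvc();
}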

There are other options available for configuring compression, including the ability to specify custom compression providers.

WebListener Server for Windows

WebListener is a server that runs directly on top of the Windows Http Server API. WebListener gives you the option to take advantage of Windows-specific features, like support for Windows authentication, port sharing, HTTPS with SNI, HTTP/2 over TLS (Windows 10), direct file transmission, response caching, and WebSockets (Windows 8).  This may be advantageous for you if you want to bundle an ASP.NET Core microservice in a Windows container that takes advantage of these Windows features.

On Windows you can use this server instead of Kestrel by referencing the Microsoft.AspNetCore.Server.WebListener package instead of the Kestrel package and configuring your WebHostBuilder to use WebListener instead of Kestrel:
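A minimal sketch of the Program.cs change:

var host = new WebHostBuilder()
    .UseWebListener()             // instead of .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .Build();

host.Run();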

You can find other samples demonstrating the use of WebListener in its GitHub repository.

Unlike the other packages that are part of this release, WebListener is being shipped as both 1.0.0 and 1.1.0. The 1.0.0 version of the package can be used in production LTS (1.0.1) ASP.NET Core applications. The 1.1.0 version of the package is the next version of WebListener as part of the 1.1.0 release.

View Components as Tag Helpers

View Components are an ASP.NET Core display concept: a Razor view that is rendered by a server-side class inheriting from the ViewComponent base class.  You can now invoke View Components from your views using Tag Helper syntax and get all the benefits of IntelliSense and Tag Helper tooling in Visual Studio. Previously, to invoke a View Component from a view you would use the Component.InvokeAsync method and pass in any View Component arguments using an anonymous object:

 @await Component.InvokeAsync("Copyright", new { website = "example.com", year = 2016 })

Instead, you can now invoke a View Component like you would any Tag Helper while getting IntelliSense for the View Component parameters:

TagHelper in Visual Studio
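As above, the markup uses the vc: prefix with kebab-cased View Component and parameter names:

<vc:copyright website="example.com" year="2016"></vc:copyright>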

This gives us the same rich IntelliSense and editor support in the Razor template editor that we have for Tag Helpers.  With the Component.InvokeAsync syntax, there is no obvious way to add CSS classes or get tooltips to assist in configuring the component like we have with the Tag Helper feature.  Finally, this keeps us in “HTML editing” mode and allows a developer to avoid shifting into C# in order to reference a ViewComponent they want to add to a page.

To enable invoking your View Components as Tag Helpers, simply register your View Components as Tag Helpers using the @addTagHelper directive:

@addTagHelper "*, WebApplication1"

Middleware as MVC filters

Middleware typically sits in the global request handling pipeline. But what if you want to apply middleware to only a specific controller or action? You can now apply middleware as an MVC resource filter using the new MiddlewareFilterAttribute.  For example, you could apply response compression or caching to a specific action, or you might use a route value based request culture provider to establish the current culture for the request using the localization middleware.

To use middleware as a filter you first create a type with a Configure method that specifies the middleware pipeline that you want to use:
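A minimal sketch, using the response caching middleware described above (the class name is arbitrary, and the caching services still need to be registered in ConfigureServices):

public class ResponseCachingPipeline
{
    public void Configure(IApplicationBuilder applicationBuilder)
    {
        applicationBuilder.UseResponseCaching();
    }
}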

You then apply that middleware pipeline to a controller, an action or globally using the MiddlewareFilterAttribute:
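For example, applied to a single action:

public class HomeController : Controller
{
    [MiddlewareFilter(typeof(ResponseCachingPipeline))]
    public IActionResult Index() => View();
}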

Cookie-based TempData provider

To use the cookie-based TempData provider you register the CookieTempDataProvider service in your ConfigureServices method after adding the MVC services as follows:

services.AddMvc();
services.AddSingleton<ITempDataProvider, CookieTempDataProvider>();

View compilation

The Razor syntax for views provides a flexible development experience where compilation of the views happens automatically at runtime when the view is executed. However, there are some scenarios where you do not want the Razor syntax compiled at runtime. You can now compile the Razor views that your application references and deploy them with your application.  To enable view compilation as part of publishing your application:

  1. Add a reference to “Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Design” under the “dependencies” section.
  2. Add a reference to “Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Tools” under the tools section
  3. Add a postpublish script to invoke view compiler:
"scripts": {

   "postpublish": "dotnet razor-precompile --configuration %publish:Configuration% --framework %publish:TargetFramework% --output-path %publish:OutputPath% %publish:ProjectPath%"

}

Azure App Service logging provider

The Microsoft.AspNetCore.AzureAppServicesIntegration package allows your application to take advantage of App Service specific logging and diagnostics. Any log messages that are written using the ILogger/ILoggerFactory abstractions will go to the locations configured in the Diagnostics Logs section of your App Service configuration in the portal (see screenshot).  We highly recommend using this logging provider when deploying an application to Azure App Service.  Prior to this feature, it was very difficult to capture log files without a third-party provider or hosted service.

Usage:

Add a reference to the Microsoft.AspNetCore.AzureAppServicesIntegration package and add the one line of code to UseAzureAppServices when configuring the WebHostBuilder in your Program.cs.

 
  var host = new WebHostBuilder()
    .UseKestrel()   
    .UseAzureAppServices()   
    .UseStartup<Startup>()   
    .Build();

NOTE: UseIISIntegration is not in the above example because UseAzureAppServices includes it for you. It shouldn’t hurt your application if you have both calls, but explicitly calling UseIISIntegration is not required.

Once you have added the UseAzureAppServices method then your application will honor the settings in the Diagnostics Logs section of the Azure App Service settings as shown below. If you change these settings, switching from file system to blob storage logs for example, your application will automatically switch to logging to the new location without you redeploying.

Azure Portal Configuration Options for Diagnostic Logging

Azure Key Vault configuration provider

Azure Key Vault is a service that can be used to store secret cryptographic keys and other secrets in a security hardened container on Azure.  You can set up your own Key Vault by following the Getting Started docs. The Microsoft.Extensions.Configuration.AzureKeyVault package then provides a configuration provider for your Azure Key Vault. This package allows you to retrieve configuration from Key Vault secrets on application start and hold it in memory, using the normal ASP.NET Core configuration abstractions to access the configuration data.

Basic usage of the provider is done like this:
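As with the preview announcement, the original snippet was an image. Here is a sketch based on the linked sample below, assuming the vault name and Azure AD client credentials are stored under the configuration keys Vault, ClientId, and ClientSecret:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json");

var config = builder.Build();

// Layer Key Vault secrets on top of the existing configuration
builder.AddAzureKeyVault(
    $"https://{config["Vault"]}.vault.azure.net/",
    config["ClientId"],
    config["ClientSecret"]);

config = builder.Build();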

For an example on how to add the Key Vault configuration provider see the sample here: https://github.com/aspnet/Configuration/tree/dev/samples/KeyVaultSample

Redis and Azure Storage Data Protection Key Repositories

The Microsoft.AspNetCore.DataProtection.AzureStorage and Microsoft.AspNetCore.DataProtection.Redis packages allow storing your Data Protection keys in Azure Storage or Redis respectively. This allows keys to be shared across several instances of a web application so that you can, for example, share an authentication cookie or CSRF protection across many load balanced servers running your ASP.NET Core application. As data protection is used behind the scenes for a few things in MVC, it’s extremely probable that once you start scaling out you will need to share the keyring. Before these two packages, your option for sharing keys was to use a network share with a file-based key repository.

Examples:

Azure:

services.AddDataProtection()
  .PersistKeysToAzureBlobStorage(new Uri("<blob URI including SAS token>"));

Redis:

// Connect
var redis = ConnectionMultiplexer.Connect("localhost:6379"); 
// Configure
services.AddDataProtection() 
  .PersistKeysToRedis(redis, "DataProtection-Keys");

NOTE: When using a non-persistent Redis instance, anything that is encrypted using Data Protection will not be able to be decrypted once the instance resets. For the default authentication flows this would usually just mean that users are redirected to log in again. However, for anything manually encrypted with Data Protection’s Protect method, you will not be able to decrypt the data at all. For this reason, you should not use a Redis instance that isn’t persistent when manually using the Protect method of Data Protection. Data Protection is optimized for ephemeral data.

Third-party dependency injection containers

In our initial release of dependency injection capabilities with ASP.NET Core, we heard feedback that there was some friction in enabling third-party providers.  With this release, we are acting on that feedback and introducing a new IServiceProviderFactory interface to enable those third-party containers to be configured easily in ASP.NET Core applications.  This interface allows you to move the construction of the container to the WebHostBuilder and allows for further customization of the mappings of your container in a new ConfigureContainer method in the Startup class.

Developers of containers can find a sample demonstrating how to connect their favorite provider, including samples using Autofac and StructureMap on GitHub.

This additional configuration should allow developers to use their favorite containers by adding a line to the Main method of their application as simple as “UseStructureMap()”, as sketched below.
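As a rough sketch: the UseStructureMap extension and the Registry-based ConfigureContainer signature come from the StructureMap container sample mentioned above, not from ASP.NET Core itself, and IFooService/FooService are placeholders:

using Microsoft.AspNetCore.Hosting;
using StructureMap; // assumption: the StructureMap container sample package

public interface IFooService { }
public class FooService : IFooService { }

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            // Hypothetical extension from the container sample; it plugs a
            // StructureMap-backed IServiceProviderFactory into the host.
            .UseStructureMap()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

public class Startup
{
    // With a container-specific IServiceProviderFactory registered, Startup can
    // expose a ConfigureContainer method that receives the container's native
    // registration type (StructureMap's Registry in this sketch).
    public void ConfigureContainer(Registry registry)
    {
        // Placeholder registration to show where container mappings go.
        registry.For<IFooService>().Use<FooService>();
    }
}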

Summary

The ASP.NET Core 1.1 release improves significantly on the previous release of ASP.NET Core. With improved tooling in the Visual Studio 2017 RC and new tooling in Visual Studio for Mac, we think you’ll find web development to be a delightful experience. This is a fully supported release and we encourage you to use the new features to better support your applications. You will find a list of known issues with workarounds on GitHub, should you run into trouble. We will continue to improve the fastest mainstream full-stack framework available on the web, and want you to join us. Download Visual Studio 2017 RC and Visual Studio for Mac from https://visualstudio.com and get the latest .NET Core SDK from https://dot.net.

Put a .NET Core App in a Container with the new Docker Tools for Visual Studio


By now, hopefully you’ve heard the good news that we’ve added first-class support for building and running .NET applications inside of Docker containers in Visual Studio 2017 RC. Visual Studio 2017 and Docker support building and running .NET applications using Windows containers (on Windows 10/Server 2016 only) and .NET Core applications in Linux containers, including the ability to publish and run Linux containers on Microsoft’s Azure App Service.

Docker containers package an application with everything it needs to run: code, runtime, system tools, system libraries – anything you would install on a server. Put simply, a container is an isolated place where an application can run without affecting the rest of the system, and without the system affecting the application. This makes containers an ideal way to package and run applications in production environments, where historically constraints imposed by the production environment (e.g. which version of the .NET runtime the server is running) have dictated development decisions. Additionally, Docker containers are very lightweight, which enables applications to scale quickly by spinning up new instances.

In this post, I’ll focus on creating a .NET Core application, publishing it to the Azure App Service Linux Preview and setting up continuous build integration and delivery to the Azure Container Service.

Getting Started

To get started in Visual Studio 2017, you need to install the “.NET Core and Docker (Preview)” workload in the new Visual Studio 2017 installer.


Once it finishes installing, you’ll need to install Docker for Windows. If you want to use Windows containers on Windows 10 or Server 2016, you’ll need the Beta channel and the Windows 10 Anniversary Edition; if you want Linux containers, you can choose either the Stable or Beta channel installers.

After you’ve finished installing Docker, you’ll need to share a drive with it where your images will be built to and run from.  To do this:

  • Right click on the Docker system tray icon and choose settings
  • Choose the “Shared Drives” tab
  • Share the drive your images will run from (this is the same drive the Visual Studio project will live on)


Creating an application with Docker support

Now that Visual Studio and Docker are installed and configured properly let’s create a .NET Core application that we’ll run in a Linux container.

On the ASP.NET application dialog, there is a checkbox that allows us to add Docker support to the application as part of project creation. For now, we’ll skip this, so we can see how to add Docker support to existing applications.

Now that we have our basic web application, let’s add a quick code snippet to the “About” page that will show which operating system the application is running on.
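A minimal sketch of the kind of snippet used, assuming the default MVC template’s HomeController (RuntimeInformation comes from the System.Runtime.InteropServices namespace):

using System.Runtime.InteropServices;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    public IActionResult About()
    {
        // Surface the OS the app is running on, e.g. a Windows or Linux description.
        ViewData["Message"] = $"Running on: {RuntimeInformation.OSDescription}";
        return View();
    }
}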

Next, we’ll hit Ctrl+F5 to run it inside IIS Express, and we can see we’re running on Windows as we would expect.

Now, to add Docker support to the application, right click on the project in Solution Explorer, choose Add, and then “Docker Project Support” (use “Docker Solution Support” to create containers for multiple projects).

You’ll see that the “Start” button has changed to say “Docker” and several Docker files have been added to the project.

Let’s hit Ctrl+F5 again and we can see that the app is now running inside a Linux container locally.


Running the application in Azure

Now let’s publish the app to Microsoft Azure App Service, which now offers the ability to run Linux Docker containers in a preview form.

To do this, I’ll right click on the app and choose “Publish”.  This will open our brand new publish page.  Click the “Profiles” dropdown and select “New Profile”, and then choose “Azure App Service Linux (Preview)” and click “OK”.


Before proceeding it’s important to understand the anatomy of how a Docker application works in a production environment:

  • A container registry is created that the Docker image is published to
  • The App Service site is created that downloads the image from the container registry and runs it
  • At any time, you can push a new image to the container registry which will then result in the copy running in App Service being updated.

With that understanding, let’s proceed to publishing our application to Azure.  The next thing we’ll see is the Azure provisioning dialog.  There are a couple of things to note about using this dialog in the RC preview:

  • If you are using an existing Resource Group, it must be in the same region as the App Service Plan you are creating
  • If you are creating a new Resource Group, you must set the Container Registry and the App Service plan to be in the same region (e.g. both must be in “West US”)
  • The VM size of the App Service Plan must be “S1” or larger

When we click “OK” it will take about a minute, and then we’ll return to the “Publish” page, where we’ll see a summary of the publish profile we just created.

Now we click “Publish” and it will take about another minute, during which time you’ll see a Docker command prompt pop up.

When the application is ready, your browser will open to the site, and we can see that we’re running on Linux in Azure!


Setting up continuous build integration and delivery to the Azure Container Service

Now let’s set up continuous build delivery to the Microsoft Azure Container Service. To do this, I’ll right click on the project and choose “Configure Continuous Delivery…”. This will bring up a continuous delivery configuration dialog.

Configure Continuous Delivery

On the Configure Continuous Delivery dialog, select a user account with a valid Azure subscription, as well as an Azure subscription with a valid container registry and an Azure Container Service running the DC/OS orchestrator.

Configure Continuous Delivery

When done, click OK to start the setup process. A dialog will pop up to explain that the setup process has started.

Configuration Started

As the continuous build delivery setup can take several minutes to complete, you may consult the ‘Continuous Delivery Tools’ output tool window later to inspect the progress.

Upon successful completion of the setup, the output window will display the configuration details used to create the build and release definitions on VSTS to enable continuous build delivery for the project to the Azure Container Service.

Setup Complete

Conclusion

Please download Visual Studio 2017 today, and give our .NET Core and Docker experience a try.  It’s worth noting that this is a preview of the experience, so please help us make it great by providing feedback in the comments below.

Client-side debugging of ASP.NET projects in Google Chrome


Visual Studio 2017 RC now supports client-side debugging of both JavaScript and TypeScript in Google Chrome.

For years, it has been possible to debug both the backend .NET code and the client-side JavaScript code running in Internet Explorer at the same time. Unfortunately, the capability was limited solely to Internet Explorer.

In Visual Studio 2017 RC that changes. You can now debug both JavaScript and TypeScript directly in Visual Studio when using Google Chrome as your browser of choice. All you need to do is select Chrome as your browser in Visual Studio and hit F5 to debug.

If you’re interested in giving us feedback on future features and ideas before we ship them, join our community.


The first thing you’ll notice when launching Chrome by hitting F5 in Visual Studio is a page that says, “Please wait while we attach…”.


What happens is that Visual Studio attaches to Chrome using the remote debugging protocol and then redirects to the ASP.NET project URL (something like http://localhost:12345) after it attaches. After the attach is complete, the “Please wait while we attach…” message remains visible while the ASP.NET site starts up, where you would normally see a blank browser.

Once the debugger is attached, script debugging is now enabled for all JavaScript files in the project as well as all TypeScript files if there is source map information available. Here’s a screen shot of a breakpoint being hit in a TypeScript file.


For TypeScript debugging, you need to instruct the compiler to produce a .map file. You can do that by placing a tsconfig.json file in the root of your project and specifying a few properties, like so:

{
  "compileOnSave": true,
  "compilerOptions": {
    "sourceMap": true
  }
}

There are developers who prefer to use Chrome’s or IE’s own dev tools to do client-side debugging, and that is great. There will be a setting in Visual Studio that allows you to disable client-side debugging in both IE and Chrome, but unfortunately it didn’t make it into the release candidate.

We hope you’ll enjoy this feature and we would love to hear your feedback in the comments section below, or via Twitter.

Download Visual Studio 2017 RC

Notes from the ASP.NET Community Standup – November 1, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.


ASP.NET Community Standup 11/01/2016

Community Links

Puma Scan is a software security Visual Studio analyzer extension that is built on top of Roslyn.

Plug ASP.NET Core Middleware in MVC Filters Pipeline 

Building An API with NancyFX 2.0 + Dapper

.NET Standard based Windows Service support for .NET 

Accessing the HTTP Context on ASP.NET Core

Accessing services when configuring MvcOptions in ASP.NET Core 

Adding Cache-Control headers to Static Files in ASP.NET Core 

Building .Net Core On Travis CI

Umbraco CLI running on ASP.NET Core

Testing SSL in ASP.NET Core

ASP.NET API Versioning 

Creating a new .NET Core web application, what are your options?

Using MongoDB .NET Driver with .NET Core WebAPI

ASP.NET Core project targeting .NET 4.5.1 running on Raspberry Pi

Free ASP.NET Core 1.0 Training on Microsoft Virtual Academy

Using dotnet watch test for continuous testing with .NET Core and XUnit.net 

Azure Log Analytics ASP.NET Core Logging extension

Bearer Token Authentication in ASP.NET Core

ASP.NET Core Module

Removal of dnvm scripts for the aspnet/home repo

Demos

ASP.NET Core 1.1 Preview 1 added a couple of new features around Azure integration, performance, and more. In this Community Standup, Damian walks us through how he easily upgraded the live.asp.net site to ASP.NET Core 1.1, as well as how to add view compilation and the Azure App Service logging provider.

Upgrading Existing Projects

Before you start using any of the ASP.NET Core 1.1 Preview 1 features, make sure to update the following:

  • Install .NET Core 1.1 Preview 1 SDK
  • Upgrade your existing project from .NET Core 1.0 to .NET Core 1.1 Preview 1. Make sure to also update your ASP.NET Core packages to their latest 1.1.0-preview1 versions.
  • Update the netcoreapp1.0 target framework to netcoreapp1.1.

View compilation

Damian went over how he added view compilation to live.asp.net. Typically, your Razor views get compiled the first time someone visits the page. With view compilation, you can precompile the Razor views that your application references and deploy them with the app. This feature allows for faster startup times in your application, since your views are ready to go.

To start using precompiled views in your application, follow these steps.

  • Add the view compilation package:
            "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Design": {
                "version": "1.1.0-preview4-final",
                "type": "build"
             }
  • Add the view compilation tool:
             "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Tools": {
                 "version": "1.1.0-preview4-final"
               }
  • Include the post-publish script to invoke precompilation (see the snippet after this list)
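For reference, a sketch of the kind of post-publish script entry used in project.json at the time (the exact tool arguments may have differed across preview versions):

"scripts": {
  "postpublish": "dotnet razor-precompile --configuration %publish:Configuration% --framework %publish:TargetFramework% --output-path %publish:OutputPath% %publish:ProjectPath%"
}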

Now that live.asp.net is configured to use view compilation, it will precompile the Razor views. Once you’ve published your application, you will notice that your publish output folder no longer contains a Views folder. Instead, you will see appname.PrecompiledViews.dll.

Azure App Service logging provider

Damian also configured live.asp.net to use the Azure App Service logging provider. By adding the Microsoft.AspNetCore.AzureAppServicesIntegration package and calling the UseAzureAppServices method in Program.cs, diagnostic logs are now turned on in Azure (see the sketch and image below).
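A minimal sketch of that Program.cs wiring, assuming the Microsoft.AspNetCore.AzureAppServicesIntegration package:

using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            // Wires up the Azure App Service logging provider
            // (and includes IIS integration for you).
            .UseAzureAppServices()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}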

With Application Logging turned on, you can choose the log level you want and see the logs in the Kudu console or in Visual Studio (see image below).


Application Logs in Kudu

This week Damian went over how to use some of the new features in ASP.NET Core 1.1 Preview 1. For more details on ASP.NET Core 1.1, please check out the announcement from last month. Thanks for watching.

MVP Hackathon 2016: Cool Projects from Microsoft MVPs


Last week was the annual MVP Summit on Microsoft’s Redmond campus.  We laughed, we cried, we shared stories around the campfire, and we even made s’mores.  Ok, I’m stretching it a bit about the last part, but we had a good time introducing the MVPs to some of the cool technologies you saw at Connect() yesterday, and some that are still in the works for 2017.  As part of the MVP Summit event, we hosted a hackathon to explore some of the new features and allow attendees to write code along with Microsoft engineers and publish that content as an open source project.

We shared the details of some of these projects with the supervising program managers covering Visual Studio, ASP.NET, and the .NET framework.  Those folks were impressed with the work that was accomplished, and now we want to share these accomplishments with you.  This is what a quick day’s worth of work can accomplish when working with your friends.


MVP Hackers at the end of the Hackathon

  • Shaun Luttin wrote a console application in F# that plays a card trick.  Source code at:  https://github.com/shaunluttin/magical-mathematics
  • Rainer Stropek created a docker image to fully automate the deployment and running of a Minecraft server with bindings to allow interactions with the server using .NET Core.  Rainer summarized his experience and the docker image on his blog
  • Tanaka Takayoshi wrote an extension command called “add” for the dotnet command-line interface.  The Add command helps format new classes properly with namespace and initial class declaration code when you are working outside of Visual Studio. Tanaka’s project is on GitHub.
  • Tomáš Herceg wrote an extension for Visual Studio 2017 that supports development with the DotVVM framework for ASP.NET.  DotVVM is a front-end framework that dramatically simplifies the amount of code you need to write in order to create useful web UI experiences.  His project can be found on GitHub at: https://github.com/riganti/dotvvm   See the animated gif below for a sample of how DotVVM can be coded in Visual Studio 2017:


    DotVVM Intellisense in action

  • The ASP.NET Monsters wrote Pugzor, a drop-in replacement for the Razor view engine using the “Pug” JavaScript library as the parser and renderer. It can be added side-by-side with Razor in your project and enabled with one line of code. If you have Pug templates (previously called Jade), these now work as-is inside ASP.NET Core MVC. The ASP.NET Monsters are: Simon Timms, David Paquette and James Chambers


    Pugzor

  • Alex Sorkoletov wrote an add-in for Xamarin Studio that helps clean up unused using statements and sort them alphabetically on every save.  The project can be found at: https://github.com/alexsorokoletov/XamarinStudio.SortRemoveUsings
  • Remo Jansen put together an extension for Visual Studio Code to display class diagrams for TypeScript.  The extension is in alpha, but looks very promising on his GitHub project page.

    Visual Studio Code - TypeScript UML Generator

    Visual Studio Code – TypeScript UML Generator

  • Giancarlo Lelli put together an extension to help deploy front-end customizations for Dynamics 365 directly from Visual Studio.  It uses the TFS Client API to detect any changes in your workspace and check in everything on your behalf. It is able to handle conflicts, preventing you from overwriting the work of other colleagues. The extension keeps the same folder structure you have in your solution explorer inside the CRM. It also supports automatically adding new web resources to a specific CRM solution. This extension uses the VS output window to provide feedback during the whole publish process.  The project can be found on its GitHub page.


    Publish to Dynamics

  • Simone Chiaretta wrote an extension for the dotnet command-line tool to manage the properties in .NET Core projects based on MSBuild. It allows setting and removing the version number, the supported runtimes and the target framework (and more properties are being added soon). And it also lists all the properties in the project file.  You can extend your .NET CLI with his NuGet package or grab the source code from GitHub.  He’s written a blog post with more details as well.


    The dotnet prop command

  • Nico Vermeir wrote an amazing little extension that enables the Surface Dial to help run the Visual Studio debugger.  He wrote a blog post about it and published his source code on GitHub.
  • David Gardiner wrote a Roslyn Analyzer that provides tips and best practice recommendations when authoring extensions for Visual Studio.  Source code is on GitHub.


    VSIX Analyzers

  • Cecilia Wirén wrote an extension for Visual Studio that allows you to add a folder on disk as a solution folder, preserving all files in the folder.  Cecilia’s code can be found on GitHub

    Add as Solution Folder

    Add Folder as Solution Folder

  • Terje Sandstrom updated the NUnit 3 adapter to support Visual Studio 2017.


    NUnit Results in Visual Studio 2017


  • Ben Adams made the Kestrel web server for ASP.NET Core 8% faster while sitting in with some of the ASP.NET Core folks.

Summary

We had an amazing time working together, pushing each other to develop and build more cool things that could be used with Visual Studio 2015, 2017, Code, and Xamarin Studio.  Stepping away from the event and reading about these cool projects inspires me to write more code, and I hope it does the same for you.  Would you be interested in participating in a hackathon with MVPs or Microsoft staff?  Let us know in the comments below.

 

Notes from the ASP.NET Community Standup – November 22, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

This week the team hosted the standup on Aerial Spaces.  Every week’s episode is published on YouTube for later reference. The team answers your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

ASP.NET Community Standup 11/22/2016

Community Links

Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM

Announcing .NET Core 1.1

App Service on Linux now supports Containers and ASP.NET Core

ASP.NET Core Framework Benchmarks Round 13

MVP Hackathon 2016: Cool Projects from Microsoft MVPs

Damian Edwards live coding live.asp.net

EDI.Net Serializer/Deserializer

ASP.NET Core’s URL Rewrite Middleware behind a load balancer

ASP.NET Core  Workshops and Code Labs

Unexpected Behavior in LanguageViewLocationExpander

Project.json to CSproj

OrchardCMS Roadmap

ASP.NET Core and the Enterprise Part 3: Middleware

Using .NET Core Configuration with legacy projects

High-Performance Data Pipelines

.NET Core versioning

Not your granddad’s .NET – Pipes Part 1

Accomplishments

Tech Empower Benchmark

TechEmpower Benchmarks Round 13 came out, and ASP.NET Core is in the top 10, serving 1,822,366 requests per second in Round 13.  Read more


Question and Answers

Question:  Will there be MVC 4 project support in Visual Studio 2017?

— Removed in RC but should be coming back in the next release.

Question: What should I grab, the ASP.NET Core 1.1 runtime or the SDK? / What’s the difference between the .NET Core SDK and runtime?

— In short, if you are a developer you want to install the .NET Core SDK. If you are a server administrator you may only want to install the runtime.

Question: Will .csproj tooling be finalized with Visual Studio 2017 RTM?

— Yes, that is the current plan in place. There are a couple of known issues for ASP.NET Core support in Visual Studio 2017; we have listed the workarounds on our GitHub repo.

Question: How far along is the basic pipeline API?

— Currently, it is being tested by some folks at Stack Overflow.  If you would like to get involved, tweet David Fowler.

Question: When will URL based cultural localization be available?

— It’s available now, with middleware as MVC filters in ASP.NET Core 1.1.

In this example from the ASP.NET Core 1.1 announcement, we used a route-value-based request culture provider to establish the current culture for the request using the localization middleware (a sketch follows below).
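A minimal sketch of that kind of setup, assuming ASP.NET Core 1.1’s MiddlewareFilterAttribute and the RouteDataRequestCultureProvider from Microsoft.AspNetCore.Localization.Routing (the route template and culture list are illustrative):

using System.Globalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Localization;
using Microsoft.AspNetCore.Localization.Routing;
using Microsoft.AspNetCore.Mvc;

// A middleware pipeline that can be applied to MVC controllers/actions as a filter.
public class LocalizationPipeline
{
    public void Configure(IApplicationBuilder app)
    {
        var supportedCultures = new[] { new CultureInfo("en-US"), new CultureInfo("fr-FR") };

        var options = new RequestLocalizationOptions
        {
            DefaultRequestCulture = new RequestCulture("en-US"),
            SupportedCultures = supportedCultures,
            SupportedUICultures = supportedCultures
        };
        // Pull the culture from the {culture} route value instead of
        // the query string or cookies.
        options.RequestCultureProviders = new IRequestCultureProvider[]
        {
            new RouteDataRequestCultureProvider { Options = options }
        };

        app.UseRequestLocalization(options);
    }
}

// Applying the pipeline to a controller via the 1.1 middleware-as-filters feature.
[MiddlewareFilter(typeof(LocalizationPipeline))]
[Route("{culture}/[controller]")]
public class HomeController : Controller
{
    public IActionResult Index() => View();
}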

The team will be back on Tuesday the 29th of November to discuss the latest updates on ASP.NET Core.  See you then!


Visual Studio Tools for Azure Functions


Today we are pleased to announce a preview of tools for building Azure Functions for Visual Studio 2015. Azure Functions provide event-based serverless computing that make it easy to develop and scale your application, paying only for the resources your code consumes during execution. This preview offers the ability to create a function project in Visual Studio, add functions using any supported language, run them locally, and publish them to Azure. Additionally, C# functions support both local and remote debugging.

In this post, I’ll walk you through using the tools by creating a C# function, covering some important concepts along the way. Then, once we’ve seen the tools in action I’ll cover some known limitations we currently have.

Also, please take a minute and let us know who you are so we can follow up and see how the tools are working.

Getting Started

Before we dive in, here’s our sample scenario: we’ll create a C# function that is triggered when a message is published to a storage queue, reverses the message, and stores both the original and reversed strings in Table storage.

  • To create a function, go to File -> New Project, then select the “Cloud” node under the “Visual C#” section and choose the “Azure Functions (Preview)” project type
  • This will give us an empty function project. There are a few things to note about the structure of the project:
  • For the purposes of this blog post, we’ll add an entry that speeds up the queue polling interval from the default of once a minute to once a second by setting “maxPollingInterval” in host.json (the value is in milliseconds; see the sketch after this list)
  • Next, we’ll add a function to the project, by right clicking on the project in Solution Explorer, choose “Add” and then “New Azure Function”
  • This will bring up the New Azure Function dialog which enables us to create a function using any language supported by Azure Functions
  • For the purposes of this post we’ll create a “QueueTrigger – C#” function, fill in the “Queue name” field, “Storage account connection” (this is the name of the key for the setting we’ll store in “appsettings.json”), and the “Name” of our function
  • This will create a new folder in the project with the name of our function, containing its key files (most notably function.json, which defines the function’s bindings, and run.csx, which contains the function code)
  • The last thing we need to do in order to hook up our function to our storage queue is provide the connection string in the appsettings.json file (in this case by setting the value of “AzureWebJobsStorage”)
  • Next we’ll edit the “function.json” file to add two bindings, one that gives us the ability to read from the table we’ll be pushing to, and another that gives us the ability to write entries to the table
  • Finally, we’ll write our function logic in the run.csx file (see the sketch after this list)
  • Running the function locally works like any other project in Visual Studio, Ctrl + F5 starts it without debugging, and F5 (or the Start/Play button on the toolbar) launches it with debugging. Note: Debugging currently only works for C# functions. Let’s hit F5 to debug the function.
  • The first time we run the function, we’ll be prompted to install the Azure Functions CLI (command line) tools. Click “Yes” and wait for them to install; our function app is now running locally. We’ll see a command prompt with some messages from the Azure Functions CLI pop up. If there were any compilation problems, this is where the messages would appear, since functions are dynamically compiled by the CLI tools at runtime.
  • We now need to manually trigger our function by pushing a message into the queue with Azure Storage Explorer. This will cause the function to execute and hit our breakpoint in Visual Studio.
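To make the steps above concrete, here are two sketches. First, the host.json entry that speeds up queue polling (the value is in milliseconds):

{
  "queues": {
    "maxPollingInterval": 1000
  }
}

And a sketch of what the run.csx logic could look like; the binding names (myQueueItem, outputTable) are assumptions and must match what is configured in function.json:

using System;

// Row shape for the Table storage output binding.
public class ReversedString
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Original { get; set; }
    public string Reversed { get; set; }
}

public static void Run(string myQueueItem, ICollector<ReversedString> outputTable, TraceWriter log)
{
    // Reverse the incoming queue message.
    var chars = myQueueItem.ToCharArray();
    Array.Reverse(chars);

    // Store both the original and reversed strings in Table storage.
    outputTable.Add(new ReversedString
    {
        PartitionKey = "strings",
        RowKey = Guid.NewGuid().ToString("N"),
        Original = myQueueItem,
        Reversed = new string(chars)
    });

    log.Info($"Reversed '{myQueueItem}'");
}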

Publishing to Azure

  • Now that we’ve tested the function locally, we’re ready to publish our function to Azure. To do this right click on the project and choose “Publish…”, then choose “Microsoft Azure App Service” as the publish target
  • Next, you can either pick an existing app, or create a new one. We’ll create a new one by clicking the “New…” button on the right side of the dialog
  • This will pop up the provisioning dialog that lets us choose or set up the Azure environment (we can customize the names or choose existing assets). These are:
    • Function App Name: the name of the function app, this must be unique
    • Subscription: the Azure subscription to use
    • Resource Group: which resource group to add the Function App to
    • App Service Plan: What app service plan you want to run the function on. For complete information read about hosting plans, but it’s important to note that if you choose an existing App Service plan you will need to set the plan to “always on” or your functions won’t always trigger (Visual Studio automatically sets this if you create the plan from Visual Studio)
  • Now we’re ready to provision (create) all of the assets in Azure. Note that the “Validate Connection” button does not work in this preview for Azure Functions
  • Once provisioning is complete, click “Publish” to publish the Function to Azure. We now have a publish profile which means all future publishes will skip the provisioning steps
    Note: If you publish to a Consumption plan, there is currently a bug where new triggers that you define (other than HTTP) will not be registered in Azure, which can cause your functions not to trigger correctly. To work around this, open your Function App in the Azure portal and click the “Refresh” button on the lower left to fix the trigger registration. This bug with publish will be fixed on the Azure side soon.
  • To verify our function is working correctly in Azure, we’ll click the “Logs” button on the function’s page, and then push a message into the Queue using Storage Explorer again. We should see a message that the function successfully processed the message
  • The last thing to note, is that it is possible to remote debug a C# function running in Azure from Visual Studio. To do this:
    • Open Cloud Explorer
    • Browse to the Function App
    • Right click and choose “Attach Debugger”

Known Limitations

As previously mentioned, this is the first preview of these tools, and we have several known limitations with them. They are as follows:

  • IntelliSense: IntelliSense support is limited and available only for C# and JavaScript by default. F#, Python, and PowerShell support is available if you have installed those optional components. It is also important to note that C# and F# IntelliSense is limited at this point to classes and methods defined in the same .csx/.fsx file and a few system namespaces.
  • Cannot add new files using “Add New Item”: Adding new files to your function (e.g. .csx or .json files) is not available through “Add New Item”. The workaround is to add them using file explorer, the Add New File extension, or another tool such as Visual Studio Code.
  • Functions published from Visual Studio are not properly registered in Azure: This is caused by a bug in the Azure service for Functions running on a Consumption plan. The workaround is to open the Function App’s page in the Azure portal and click the “Refresh” button in the bottom left. This will register the functions with Azure.
  • Function bindings generate incorrectly when creating a C# Image Resize function: The settings for the binding “Azure Storage Blob out (imageSmall)” are overridden by the settings for the binding “Azure Storage Blob out (imageMedium)” in the generated function.json. The workaround is to go to the generated function.json and manually edit the “imageSmall” binding.

Conclusion

Please download and try out this preview of Visual Studio Tools for Azure Functions and let us know who you are so we can follow up and see how they are working. Additionally, please report any issues you encounter on our GitHub repo (include “Visual Studio” in the issue title) and provide any comments or questions you have below, or via Twitter.

Introducing the ASP.NET Async OutputCache Module


OutputCacheModule is ASP.NET’s default handler for storing the generated output of pages, controls, and HTTP responses.  This content can then be reused when appropriate to improve performance. Prior to the .NET Framework 4.6.2, the OutputCache Module did not support async read/write to the storage.

Starting with the .NET Framework 4.6.2 release, we introduced a new abstract class named OutputCacheProviderAsync, which defines the contract for an async OutputCache provider and enables asynchronous access to a shared OutputCache. The Async OutputCache Module that supports this contract is released as a NuGet package, which you can install into any 4.6.2+ web application.

Benefits of the Async OutputCache Module

It’s all about scalability. The cloud makes it really easy to scale out computing resources to serve large spikes in requests to an application. When you consider the scalability of an OutputCache, you cannot use an in-memory provider, because the in-memory provider does not allow you to share data across multiple web servers.

You will need to store OutputCache data in another storage medium such as Microsoft Azure SQL Database, NoSQL, or Redis Cache.  Currently, the OutputCache interaction with these storage mediums is restricted to run synchronously. With this update, the new async OutputCache module enables you to read and write data from these storage providers asynchronously. Async I/O operations help release threads more quickly than synchronous I/O operations, which allows ASP.NET to handle other requests. If you are interested in more details about programming asynchronously and the use of the async and await keywords, you can read Stephen Cleary’s excellent article Async Programming: Introduction to Async/Await on ASP.NET.

How to use the Async OutputCache Module

  1. Target your application to 4.6.2+.

The OutputCacheProviderAsync contract was introduced in .NET Framework 4.6.2, therefore you need to target your application to .NET Framework 4.6.2 or above in order to use the Async OutputCache Module. Download the .NET Framework 4.6.2 Developer Pack if you do not have it installed yet, and update your application’s web.config targetFramework attributes as demonstrated below:

<system.web>
  <compilation debug="true" targetFramework="4.6.2"/>
  <httpRuntime targetFramework="4.6.2"/>
</system.web>
  2. Add the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync NuGet package.

Use the NuGet package manager to install the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync package.  This will add a reference to the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync.dll and add the following configuration into the web.config file.

<system.webServer>
  <modules>
    <remove name="OutputCache"/>
    <add name="OutputCache" type="Microsoft.AspNet.OutputCache.OutputCacheModuleAsync, Microsoft.AspNet.OutputCache.OutputCacheModuleAsync" preCondition="integratedMode"/>
  </modules>
</system.webServer>

Now your application will start using the Async OutputCache Module. If no OutputCache provider is specified in web.config, the module will use a default synchronous in-memory provider, in which case you won’t get the async benefits. We have not yet released an async OutputCache provider, but plan to in the near future. Let’s take a look at how you can implement an async OutputCache provider of your own.

How to implement an async OutputCache Provider

An async OutputCache Provider just needs to implement the OutputCacheProviderAsync interface.

More specifically, the async provider should implement the following eight APIs:

  • Add(String, Object, DateTime): Inserts the specified entry into the output cache (inherited from OutputCacheProvider).
  • AddAsync(String, Object, DateTime): Asynchronously inserts the specified entry into the output cache.
  • Get(String): Returns a reference to the specified entry in the output cache (inherited from OutputCacheProvider).
  • GetAsync(String): Asynchronously returns a reference to the specified entry in the output cache.
  • Remove(String): Removes the specified entry from the output cache (inherited from OutputCacheProvider).
  • RemoveAsync(String): Asynchronously removes the specified entry from the output cache.
  • Set(String, Object, DateTime): Inserts the specified entry into the output cache, overwriting the entry if it is already cached (inherited from OutputCacheProvider).
  • SetAsync(String, Object, DateTime): Asynchronously inserts the specified entry into the output cache, overwriting the entry if it is already cached.
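To make the shape of a provider concrete, here is a minimal in-memory sketch assuming the signatures listed above; a real provider would talk to a shared store such as Redis or Azure SQL Database rather than a local dictionary:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Caching;

// A proof-of-concept provider; not production-ready.
public class InMemoryAsyncOutputCacheProvider : OutputCacheProviderAsync
{
    private readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _cache =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Return the existing entry if one is already cached; otherwise insert.
        return _cache.GetOrAdd(key, _ => Tuple.Create(entry, utcExpiry)).Item1;
    }

    public override Task<object> AddAsync(string key, object entry, DateTime utcExpiry)
        => Task.FromResult(Add(key, entry, utcExpiry));

    public override object Get(string key)
    {
        Tuple<object, DateTime> value;
        // Treat expired entries as misses.
        if (!_cache.TryGetValue(key, out value) || value.Item2 < DateTime.UtcNow)
        {
            return null;
        }
        return value.Item1;
    }

    public override Task<object> GetAsync(string key) => Task.FromResult(Get(key));

    public override void Remove(string key)
    {
        Tuple<object, DateTime> ignored;
        _cache.TryRemove(key, out ignored);
    }

    public override Task RemoveAsync(string key)
    {
        Remove(key);
        return Task.CompletedTask;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
        => _cache[key] = Tuple.Create(entry, utcExpiry);

    public override Task SetAsync(string key, object entry, DateTime utcExpiry)
    {
        Set(key, entry, utcExpiry);
        return Task.CompletedTask;
    }
}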

If you want your provider to support Cache Dependency and callback functionality, you will need to implement the interface ICacheDependencyHandler, which is defined within the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync.dll.  You can add this reference by installing the same NuGet package referenced in our web project.

The current version of the Async OutputCache Module does not support Registry Key or SQL dependencies. Depending on the feedback we hear, we may consider adding them in the future.

Once you have finished implementing your provider class, you can use it in a web application by adding a reference to your library and adding the following configurations into the web.config file:

<system.web>
  <caching>
    <outputCache defaultProvider="CustomOutputCacheProvider">
    <providers>
      <add name="CustomOutputCacheProvider" type="CustomOutputCacheProvider.CustomOutputCacheProvider, CustomOutputCacheProvider" />
    </providers>
    </outputCache>
  </caching>
</system.web>

That should work! If you need some help to get started, here is an example of an in-memory Async OutputCache Provider as a proof of concept. You can see that it has implemented all the APIs needed and is ready to plug in and use.

 

Summary

To wrap up what we have talked about: we have released an async version of the OutputCache Module, which allows ASP.NET to take advantage of modern async techniques to help scale your OutputCache. With this new contract, you can now write your own async OutputCache providers easily. We encourage you to try this module and extend your current OutputCache provider to any storage medium that supports async interactions. We also encourage you to share the providers you write on NuGet.org and let us know about them in the comments area below. Good luck and happy coding! If you have any questions or suggestions, please feel free to reach out to us by leaving your comments here.

Notes from the ASP.NET Community Standup – November 29, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway (Jon’s in Russia this week), and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

ASP.NET Community Standup 11/29/2016

Quick Note: Jon’s in Russia this week, so we don’t have any community links this week.

Question and Answers

This week Damian and Scott jumped right into questions. Damian had a question on Hanselman’s post “Publishing ASP.NET Core 1.1 applications to Azure using git deploy“.

Damian’s Question: “How did you create a project without a global.json? …. In Visual Studio today the project always includes a global.json… did you create it on a Mac?”

— Scott: “dotnet new in the command line.”

Damian went on to explain the difference between a new application created using the dotnet cli and one created in Visual Studio. When a .NET Core project is created using the dotnet new templates, it does not come with solution level files like global.json.

dotnet new template

dotnet new template files

Visual Studio 2015 template

Visual Studio 2015 template with global.json

Today, global.json is how you set the version of the .NET Core SDK needed for your application. Remember that unless you specify the SDK version, .NET Core will use the latest one on your machine and your app will not work. If you find yourself in a scenario similar to the one mentioned, this is how you fix it:

Find out what version of SDK you have locally.


Add global.json to your project and include the appropriate version of the SDK.
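A sketch of such a global.json, with an illustrative SDK version (use whatever version your app was built with, as reported by dotnet --version):

{
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}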

Check out Hanselman’s post “Publishing ASP.NET Core 1.1 applications to Azure using git deploy” for more information on the above.

Question: What are we doing to simplify the Docker versioning numbers?

— Now that we have released 1.0 and 1.1, we can make a fair assessment of how well the versioning strategy is working. Based on those experiences we are going to make some adjustments.

Question: Why isn’t ASP.NET Core 1.1 backward compatible? I have a lot of 1.0 libraries.

— The intent is that a minor release, like going from 1.0 to 1.1 of any package or component, shouldn’t break anything. However, the support matrix for .NET Core is that you can’t mix Current-train components with LTS components. For example, you can’t use ASP.NET Core hosting 1.0 (LTS) with MVC for ASP.NET Core 1.1 (Current).

See you at our next community standup!

 

New Updates to Web Tools in Visual Studio 2017 RC

$
0
0

Update 12/15: There was a bug in the Visual Studio 2017 installer that shipped between 12/12 and 12/14: if you upgraded a prior RC installation, it uninstalled IIS Express, Web Deploy, and LocalDB.  The fix is to manually re-install IIS Express and Web Deploy.  We shipped an updated installer on 12/15 that fixed this issue, so if you upgrade on 12/15 or later you will not be affected. For details see our known issues page.

Today we announced an update to Visual Studio 2017 RC that includes a variety of improvements for both ASP.NET and ASP.NET Core projects. If you’ve already installed Visual Studio 2017 RC then these updates will be pushed to you automatically. Otherwise, simply install Visual Studio 2017 RC and you will get the latest updates. Below is a summary of the improvements to the Web tools in this release:

  • The ability to turn off script debugging for Chrome and Internet Explorer if you prefer to use the in-browser tools. To do this, go to Debug -> Options, and uncheck “Enable JavaScript debugging for ASP.NET (Chrome and IE)”.
    chrome-script-debugging
  • Bower packages now restore correctly without any manual workarounds required.
  • General stability improvements for ASP.NET Core applications, including:
    • Usability and stability improvements for creating ASP.NET Core apps with Docker containers. Most notably, when provisioning an app in Azure App Service, new resource groups no longer need to be created in the same region as the App Service plan.
    • Entity Framework Core commands such as Add-Migration, and Update-Database can be invoked from the NuGet Package Manager Console.
    • ASP.NET Core applications now work with Windows Authentication.
    • Lots of improvements to the .NET Core tooling. For complete details see the .NET team blog post.

Thanks for trying out this latest update of Visual Studio 2017! For an up to date list of known issues see our GitHub page, and keep the feedback coming by reporting any issues using the built-in feedback tools.

Announcing Microsoft ASP.NET WebHooks V1


We are very happy to announce ASP.NET WebHooks V1 making it easy to both send and receive WebHooks with ASP.NET.

WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more — the possibilities are endless! When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Because of their simplicity, WebHooks are already exposed by most popular services and Web APIs. To help manage WebHooks, Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application:

The two parts can be used together or apart depending on your scenario. If you only need to receive WebHooks from other services, then you can use just the receiver part; if you only want to expose WebHooks for others to consume, then you can do just that.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, is available as Open Source on GitHub, and as Nuget packages.

A port to the ASP.NET Core is being planned so please stay tuned!

Receiving WebHooks

Dealing with WebHooks depends on who the sender is. Sometimes there are additional steps when registering a WebHook, such as verifying that the subscriber is really listening. Often the security model varies quite a bit. Some WebHooks provide a push-to-pull model where the HTTP POST request only contains a reference to the event information, which is then retrieved independently.

The purpose of Microsoft ASP.NET WebHooks is to make it both simpler and more consistent to wire up your API without spending a lot of time figuring out how to handle any WebHook variant:

WebHookReceivers

A WebHook handler is where you process the incoming WebHook. Here is a sample handler illustrating the basic model. No registration is necessary – it will automatically get picked up and called:

public class MyHandler : WebHookHandler
{
    // The ExecuteAsync method is where to process the WebHook data regardless of receiver
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // Get the event type
        string action = context.Actions.First();

        // Extract the WebHook data as JSON or any other type as you wish
        JObject data = context.GetDataOrDefault<JObject>();

        return Task.FromResult(true);
    }
}

Finally, we want to ensure that we only receive HTTP requests from the intended party. Most WebHook providers use a shared secret which is created as part of subscribing for events. The receiver uses this shared secret to validate that the request comes from the intended party. It can be provided by setting an application setting in the Web.config file, or better yet, configured through the Azure portal or even retrieved from Azure Key Vault.

For more information about receiving WebHooks and lots of samples, please see these resources:

Sending WebHooks

Sending WebHooks is slightly more involved in that there are more things to keep track of. To support other APIs registering for WebHooks from your ASP.NET application, we need to provide support for:

  • Exposing which events subscribers can subscribe to, for example Item Created and Item Deleted;
  • Managing subscribers and their registered WebHooks which includes persisting them so that they don’t disappear;
  • Handling per-user events in the system and determine which WebHooks should get fired so that WebHooks go to the correct receivers. For example, if user A caused an Item Created event to fire then determine which WebHooks registered by user A should be sent. We don’t want events for user A to be sent to user B
  • Sending WebHooks to receivers with matching WebHook registrations.

As described in the blog Sending WebHooks with ASP.NET WebHooks Preview, the basic model for sending WebHooks works as illustrated in this diagram:

WebHooksSender

Here we have a regular Web site (for example deployed in Azure) with support for registering WebHooks. WebHooks are typically triggered as a result of incoming HTTP requests through an MVC controller or a WebAPI controller. The orange blocks are the core abstractions provided by ASP.NET WebHooks:

  1. IWebHookStore: An abstraction for storing WebHook registrations persistently. Out of the box we provide support for Azure Table Storage and SQL but the list is open-ended.
  2. IWebHookManager: An abstraction for determining which WebHooks should be sent as a result of an event notification being generated. The manager can match event notifications with registered WebHooks as well as applying filters.
  3. IWebHookSender: An abstraction for sending WebHooks determining the retry policy and error handling as well as the actual shape of the WebHook HTTP requests. Out of the box we provide support for immediate transmission of WebHooks as well as a queuing model which can be used for scaling up and out, see the blog New Year Updates to ASP.NET WebHooks Preview for details.

The registration process can happen through any number of mechanisms as well. Out of the box we support registering WebHooks through a REST API but you can also build registration support as an MVC controller or anything else you like.

It’s also possible to generate WebHooks from inside a WebJob. This enables you to send WebHooks not just as a result of incoming HTTP requests but also as a result of messages being sent on a queue, a blob being created, or anything else that can trigger a WebJob:

WebHooksWebJobsSender

The following resources provide details about building support for sending WebHooks as well as samples:

Thanks for all the feedback and comments throughout the development process; it is very much appreciated!

Have fun!

Henrik

Notes from the ASP.NET Community Standup –December 13, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.


ASP.NET Community Standup 12/14/2016

Community Links

Rethinking email confirmation

Managing Cookie Lifetime with ASP.NET Core OAuth 2.0 providers

Build a REST API for your Mobile Apps with ASP.NET Core

IdentityServer4 and ASP.NET Core 1.1

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

Making Application Insight Fast and Secure

Simple SQL Localization NuGet package

Building Application Insights Logging Provider for ASP.NET Core

Generic Repository Pattern In ASP.NET Core

Integration Testing with Entity Framework Core and SQL Server

Bare metal APIs with ASP.NET Core MVC

Accessing HttpContext outside of framework components in ASP.NET Core

Optimize expression compilation memory usage

ASP.NET Core Response Optimization

Angular 2 and ASP.NET Core MVC

Dockerizing a Real World ASP.NET Core Application

Convert ASP.NET Web Servers to Docker with Image2Docker

Sharing code across .NET platforms with .NET Standard

Multiple Versions of .NET Core Runtimes and SDK Tools SxS Survive Guide

Migration to ASP.NET Core: Considerations and Strategies

Updating Visual Studio 2017 RC – .NET Core Tooling improvement

Accomplishments

On December 12th, we announced updates to the .NET Core tooling for Visual Studio 2017 RC. This update came with enhancements and bug fixes to the earlier release of VS 2017 .NET Core tooling. Some areas addressed include:

csproj file simplification: .NET Core project files now use an even more simplified syntax, making them easier to read.

Previous

Simplified
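As an illustrative sketch (not the exact RC output; details varied between preview builds), the simplified csproj format looks roughly like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.0" />
  </ItemGroup>

</Project>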

CLI commands added: New commands were added for adding and removing project-to-project references.

Overall quality improved: Bug fixes in xproj to csproj migration, project to project references, NuGet, MSBuild and ASP.NET Core with Docker.

For more details on the .NET Core Tooling improvements please read the announcement here.

Questions and Answers

Question: With the new VS 2017 .NET Core tooling updates, I get a csproj file when I create a new application. Will I ever want to use project.json again?

— The logic is: when you go into a new folder and type dotnet, it needs to find an SDK. By default it will use the latest version of the .NET Core SDK available. However, by using global.json you can specify a previous version of the SDK that you would like to use.

Question: Is global.json still the router for the dotnet SDK? Check out this post for reference.

— In the current build it is, but it will be replaced. The intent is that we will still support side-by-side SDKs with the ability to switch between them.

See you at our next community standup!

 

 

Notes from the ASP.NET Community Standup –January 3, 2017


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.


ASP.NET Community Standup 1/3/2017

Community Links

Custom Tag Helper: Toggling Visibility On Existing HTML elements

Introducing a new Markdown View Engine for ASP.NET Core

Building production ready Angular apps with Visual Studio and ASP.NET Core

Cooking with ASP.NET Core and Angular 2

Angular2 CLI with ASP.NET Core application – tutorial

A Year of Open Source (2016)

Bootstrap Flexbox Navbar

React App +  ASP.NET Core

Azure AD as a Identity Provider in ASP.NET Core application

IdentityServer4.1.0.0

Use the mssql extension for Visual Studio Code

How to enable gZip compression in ASP.NET Core

Adding static file caching to live.asp.net

Automating Deployment Of ASP.NET Core To Azure App Service From Linux

Orchestrating multi service asp.net core application using docker-compose

Creating a WebSockets middleware for ASP .NET Core

Fluent URL builder and testable HTTP for .NET

GitHub Issue: app_offline.htm is case sensitive             

Questions and Answers

Question: When will the list of .NET Standard 2.0 APIs be baked/ready?

— This is all available on the .NET Standard GitHub repo. For more information, please check out the .NET blog and the .NET Standard GitHub repo.

Question: Is Visual Studio 2017 still in RC?

— Yes, it is. The team does continue to push updates to Visual Studio 2017 RC, and you will notice a change in the build number, but this is still all RC.

Question: How do I run an ASP.NET Core 1.1 application on update 3?

— Currently, you have to update the project manually. Visual Studio does not contain templates for ASP.NET Core 1.1. For more information on how to update your application to 1.1 please read ASP.NET Core 1.1 RTM announcement from November 2016.

Note: Visual Studio 2017 will have templates for both ASP.NET Core 1.0 and 1.1.

See you at our next community standup!

 


ASP.NET Core Authentication with IdentityServer4


This is a guest post by Mike Rousos

In my post on bearer token authentication in ASP.NET Core, I mentioned that there are a couple good third-party libraries for issuing JWT bearer tokens in .NET Core. In that post, I used OpenIddict to demonstrate how end-to-end token issuance can work in an ASP.NET Core application.

Since that post was published, I’ve had some requests to also show how a similar result can be achieved with the other third-party authentication library available for .NET Core: IdentityServer4. So, in this post, I’m revisiting the question of how to issue tokens in ASP.NET Core apps and, this time, I’ll use IdentityServer4 in the sample code.

As before, I think it’s worth mentioning that there are a lot of good options available for authentication in ASP.NET Core. Azure Active Directory Authentication is an easy way to get authentication as a service. If you would prefer to own the authentication process yourself, I’ve used and had success with both OpenIddict and IdentityServer4.

Bear in mind that both IdentityServer4 and OpenIddict are third-party libraries, so they are maintained and supported by community members – not by Microsoft.

The Scenario

As you may remember from last time, the goal of this scenario is to set up an authentication server that allows users to sign in (via ASP.NET Core Identity) and provides a JWT bearer token that can be used to access protected resources from a SPA or mobile app.

In this scenario, all the components are owned by the same developer and trusted, so an OAuth 2.0 resource owner password flow is acceptable (and is used here because it’s simple to use in a demonstration). Be aware that this model exposes user credentials and access tokens (both of which are sensitive and could be used to impersonate a user) to the client. In more complex scenarios (especially if clients shouldn’t be trusted with user credentials or access tokens), OpenID Connect flows such as implicit or hybrid flows are preferable. IdentityServer4 and OpenIddict both support those scenarios. One of IdentityServer4’s maintainers (Dominick Baier) has a good blog post on when different flows should be used and IdentityServer4 quickstarts include a sample of using the implicit flow.

As we walk through this scenario, I’d also encourage you to check out IdentityServer4 documentation, as it gives more detail than I can fit into this (relatively) short post.

Getting Started

As before, my first step is to create a new ASP.NET Core web app from the ‘web application’ template, making sure to select “Individual User Accounts” authentication. This will create an app that uses ASP.NET Core Identity to manage users. An Entity Framework Core context will be auto-generated to manage identity storage. The connection string in appsettings.json points to the database where this data will be stored.

Because it’s interesting to understand how IdentityServer4 includes role and claim information in its tokens, I also seed the database with a couple of roles and add a custom property (OfficeNumber) to my ApplicationUser type, which can be used as a custom claim later (a sketch of the user type is below).
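The customized user type is just the template’s ApplicationUser with one extra property (a minimal sketch; the template generates the rest of the class for you):

using Microsoft.AspNetCore.Identity.EntityFrameworkCore;

public class ApplicationUser : IdentityUser
{
    // Custom property that will surface later as an "office" claim in tokens
    public string OfficeNumber { get; set; }
}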

These initial steps of setting up an ASP.NET Core application with identity are identical to what I did previously with OpenIddict, so I won’t go into great detail here. If you would like this setup explained further, please see my previous post.

Adding IdentityServer4

Now that our base ASP.NET Core application is up and running (with Identity services), we’re ready to add IdentityServer4 support.

  1. Add "IdentityServer4": "1.0.2" as a dependency in the app’s project.json file.
  2. Add IdentityServer4 to the HTTP request processing pipeline with a call to app.UseIdentityServer() in the app’s Startup.Configure method.
    1. It’s important that the UseIdentityServer() call come after registering ASP.NET Core Identity (app.UseIdentity()).
  3. Register IdentityServer4 services in the Startup.ConfigureServices method by calling services.AddIdentityServer() (see the sketch after this list).
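Put together, the wiring in Startup looks roughly like this (a minimal sketch; the signing credential and resource configuration are added in the following sections):

public void ConfigureServices(IServiceCollection services)
{
    // ... Identity, MVC, and other services ...
    services.AddIdentityServer();
}

public void Configure(IApplicationBuilder app)
{
    // ASP.NET Core Identity must be registered first...
    app.UseIdentity();

    // ...followed by IdentityServer4
    app.UseIdentityServer();

    // ... MVC, static files, etc. ...
}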

We’ll also want to specify how IdentityServer4 should sign tokens. During development, an auto-generated certificate can be used to sign tokens by calling AddTemporarySigningCredential after the call to AddIdentityServer in Startup.ConfigureServices. Eventually, we’ll want to use a real cert for signing, though. We can sign with an x509 certificate by calling AddSigningCredential:

services.AddIdentityServer()
  // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
  .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")));

Note that you should not load the certificate from the app path in production; there are other AddSigningCredential overloads that can be used to load the certificate from the machine’s certificate store.
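For example, a production app might look the certificate up by thumbprint in the local machine store and pass the result to the same X509Certificate2-based overload. (A hedged sketch; the helper name and the thumbprint are placeholders.)

using System.Linq;
using System.Security.Cryptography.X509Certificates;

private static X509Certificate2 LoadSigningCertificate(string thumbprint)
{
    using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
    {
        store.Open(OpenFlags.ReadOnly);

        // validOnly: false also permits self-signed test certificates
        return store.Certificates
            .Find(X509FindType.FindByThumbprint, thumbprint, validOnly: false)
            .Cast<X509Certificate2>()
            .FirstOrDefault();
    }
}

// Usage: .AddSigningCredential(LoadSigningCertificate("<your cert thumbprint>"))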

As mentioned in my previous post, it’s possible to create self-signed certificates for testing this out with the makecert and pvk2pfx command line tools (which should be on the path in a Visual Studio Developer Command prompt).

  • makecert -n "CN=AuthSample" -a sha256 -sv IdentityServer4Auth.pvk -r IdentityServer4Auth.cer
    • This will create a new self-signed test certificate with its public key in IdentityServer4Auth.cer and its private key in IdentityServer4Auth.pvk.
  • pvk2pfx -pvk IdentityServer4Auth.pvk -spc IdentityServer4Auth.cer -pfx IdentityServer4Auth.pfx
    • This will combine the pvk and cer files into a single pfx file containing both the public and private keys for the certificate. Our app will use the private key from the pfx to sign tokens. Make sure to protect this file. The .cer file can be shared with other services for the purpose of signature validation.

Token issuance from IdentityServer4 won’t yet be functional, but this is the skeleton of how IdentityServer4 is connected to our ASP.NET Core app.

Configuring IdentityServer4

Before IdentityServer4 will function, it must be configured. This configuration (which is done in ConfigureServices) allows us to specify how users are managed, what clients will be connecting, and what resources/scopes IdentityServer4 is protecting.

Specify protected resources

IdentityServer4 must know what scopes can be requested by users. These are defined as resources. IdentityServer4 has two kinds of resources:

  • API resources represent some protected data or functionality which a user might gain access to with an access token. An example of an API resource would be a web API (or set of APIs) that require authorization to call.
  • Identity resources represent information (claims) which are given to a client to identify a user. This could include their name, email address, or other claims. Identity information is returned in an ID token by OpenID Connect flows. In our simple sample, we’re using an OAuth 2.0 flow (the resource owner password flow), so we won’t be using identity resources.

The simplest way to specify resources is to use the AddInMemoryApiResources and AddInMemoryIdentityResources extension methods to pass a list of resources. In our sample, we do that by updating our services.AddIdentityServer() call to read as follows:

services.AddIdentityServer()
  // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
  .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")))
  .AddInMemoryApiResources(MyApiResourceProvider.GetAllResources()); // <- THE NEW LINE 

The MyApiResourceProvider.GetAllResources() method just returns an IEnumerable of ApiResources.

return new[]
{
    // Add a resource for some set of APIs that we may be protecting
    // Note that the constructor will automatically create an allowed scope with
    // name and claims equal to the resource's name and claims. If the resource
    // has different scopes/levels of access, the scopes property can be set to
    // list specific scopes included in this resource, instead.
    new ApiResource(
        "myAPIs",                                       // Api resource name
        "My API Set #1",                                // Display name
        new[] { JwtClaimTypes.Name, JwtClaimTypes.Role, "office" }) // Claims to be included in access token
};

If we also needed identity resources, they could be added with a similar call to AddInMemoryIdentityResources.

If more flexibility is needed in specifying resources, this can be accomplished by registering a custom IResourceStore with ASP.NET Core’s dependency injection. An IResourceStore allows for finer control over how resources are created, allowing a developer to read resource information from an external data source, for example. An IResourceStore which works with EntityFramework.Core (IdentityServer4.EntityFramework.Stores.ResourceStore) is available in the IdentityServer4.EntityFramework package.

Specify Clients

In addition to specifying protected resources, IdentityServer4 must be configured with a list of clients that will be requesting tokens. Like configuring resources, client configuration can be done with an extension method: AddInMemoryClients. Also like configuring resources, it’s possible to have more control over the client configuration by implementing our own IClientStore. In this sample, a simple call to AddInMemoryClients would suffice to configure clients, but I opted to use an IClientStore to demonstrate how easy it is to extend IdentityServer4 in this way. This would be a useful approach if, for example, client information was read from an external database. And, as with IResourceStore, you can find a ready-made IClientStore implementation for working with EntityFramework.Core in the IdentityServer4.EntityFramework package.

The IClientStore interface only has a single method (FindClientByIdAsync) which is used to look up clients given a client ID. The returned object (of type Client) contains, among other things, information about the client’s name, allowed grant types and scopes, token lifetimes, and the client secret (if it has one).

In my sample, I added the following IClientStore implementation which will yield a single client configured to use the resource owner password flow and our custom ‘myAPIs’ resource:

public class CustomClientStore : IClientStore
{
    public static IEnumerable<Client> AllClients { get; } = new[]
    {
        new Client
        {
            ClientId = "myClient",
            ClientName = "My Custom Client",
            AccessTokenLifetime = 60 * 60 * 24,
            AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
            RequireClientSecret = false,
            AllowedScopes =
            {
                "myAPIs"
            }
        }
    };

    public Task<Client> FindClientByIdAsync(string clientId)
    {
        return Task.FromResult(AllClients.FirstOrDefault(c => c.ClientId == clientId));
    }
}

I then registered the store with ASP.NET Core dependency injection (services.AddSingleton<IClientStore, CustomClientStore>() in Startup.ConfigureServices).

Connecting IdentityServer4 and ASP.NET Core Identity

To use ASP.NET Core Identity, we’ll be using the IdentityServer4.AspNetIdentity package. After adding this package to our project.json, the previous services.AddIdentityServer() call in Startup.ConfigureServices can be updated to look like this:

services.AddIdentityServer()
  // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
  .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")))
  .AddInMemoryApiResources(MyApiResourceProvider.GetAllResources())
  .AddAspNetIdentity<ApplicationUser>(); // <- THE NEW LINE

This will cause IdentityServer4 to get user profile information from our ASP.NET Core Identity context, and will automatically set up the necessary IResourceOwnerPasswordValidator for validating credentials. It will also configure IdentityServer4 to correctly extract JWT subject, user name, and role claims from ASP.NET Core Identity entities.

Putting it Together

With configuration done, IdentityServer4 should now work to serve tokens for the client we defined. The registering of IdentityServer4 services in Startup.ConfigureServices ends up looking like this all together:

// Add IdentityServer services
services.AddSingleton<IClientStore, CustomClientStore>();

services.AddIdentityServer()
    // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
    .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")))
    .AddInMemoryApiResources(MyApiResourceProvider.GetAllResources())
    .AddAspNetIdentity<ApplicationUser>();

As before, a tool like Postman can be used to test out the app. The scope we specify in the request should be our custom API resource scope (‘myAPIs’).

Here is a sample token request:

POST /connect/token HTTP/1.1
Host: localhost:5000
Cache-Control: no-cache
Postman-Token: 958df72b-663c-5638-052a-aed41ba0dbd1
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=Mike%40Contoso.com&password=MikesPassword1!&client_id=myClient&scope=myAPIs

The returned access token in our app’s response (which can be decoded using online utilities) looks like this:

{
 alg: "RS256",
 kid: "671A47CE65E10A98BB86EDCD5F9684E9D048FAE9",
 typ: "JWT",
 x5t: "ZxpHzmXhCpi7hu3NX5aE6dBI-uk"
}.
{
 nbf: 1481054282,
 exp: 1481140682,
 iss: "http://localhost:5000",
 aud: [
  "http://localhost:5000/resources",
  "myAPIs"
 ],
 client_id: "myClient",
 sub: "f6435683-f81c-4bd4-9c14-c7c09b236f4e",
 auth_time: 1481054281,
 idp: "local",
 name: "Mike@Contoso.com",
 role: "Administrator",
 office: "300",
 scope: [
  "myAPIs"
 ],
 amr: [
  "pwd"
 ]
}.
[signature]

You can read more details about how to understand the JWT fields in my previous post. Note that there are a few small differences between the tokens generated with OpenIddict and those generated with IdentityServer4.

  • IdentityServer4 includes the amr (authentication method references) field which lists authentication methods used.
  • IdentityServer4 always requires a client be specified in token requests, so it will always have a client_id in the response whereas OpenIddict treats the client as optional for some OAuth 2.0 flows.
  • IdentityServer4 does not include the optional iat field indicating when the access token was issued, but does include the auth_time field (defined by OpenID Connect as an optional field for OAuth 2.0 flows) which will have the same value.

In both cases, it’s possible to customize claims that are returned for given resources/scopes, so developers can make sure claims important to their scenarios are included.

Conclusion

Hopefully this walkthrough of a simple IdentityServer4 scenario is useful for understanding how that package can be used to enable authentication token issuance in ASP.NET Core. Please be sure to check out the IdentityServer4 docs for more complete documentation. As IdentityServer4 is not a Microsoft-owned library, support questions or issue reports should be directed to IdentityServer or the IdentityServer4 GitHub repository.

The scenario implemented here is no different from what was covered previously, but serves as an example of how different community-driven libraries can work to solve a given problem. One of the most exciting aspects of .NET Core is the tremendous community involvement we’ve seen in producing high-quality libraries to extend what can be done with .NET Core and ASP.NET Core. I think token-based authentication is a great example of that.

I have checked in sample code that shows the end product of the walk-through in this blog. Reviewing that repository may be helpful in clarifying any remaining questions.


Notes from the ASP.NET Community Standup – January 10, 2017


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

(Sorry for the delay on this one.)


ASP.NET Community Standup 1/10/2017

Community Links

Your First Angular 2, ASP.NET Core Project in Visual Studio Code – Part 6

ASP.NET Core Template Pack

Introducing downr: A simple blogging engine in ASP.NET Core with support for Markdown

mDocs: Building a project documentation using Markdown View Engine

An introduction to ViewComponents – a login status view component 

Configuring .NET Core Applications using Consul

Visual Studio 2017 and Visual Studio 2015 with .NET Core

Smarter build scripts with MSBuild and .NET Core

Demo: Azure Application Insights & ASP.NET Core 1.1

Application Insights is an application performance service used to monitor live web applications, detect anomalies, and perform analytics. In this community standup, Damian went over some Azure Application Insights features he added to live.asp.net.
Damian shared how he is using App Insights to log application lifetime events on the live.asp.net site. He did this by creating an ASP.NET Core startup filter, a piece of code that you can run during application start without having to add it to Startup.cs. A sketch of the idea follows.
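Damian’s exact code isn’t reproduced here, but a minimal sketch of the approach looks like this (the class name and event names are illustrative):

using System;
using Microsoft.ApplicationInsights;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public class AppLifetimeLoggingStartupFilter : IStartupFilter
{
    private readonly IApplicationLifetime _lifetime;
    private readonly TelemetryClient _telemetry;

    public AppLifetimeLoggingStartupFilter(IApplicationLifetime lifetime, TelemetryClient telemetry)
    {
        _lifetime = lifetime;
        _telemetry = telemetry;
    }

    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        // Hook application lifetime events and send them to Application Insights
        _lifetime.ApplicationStarted.Register(() => _telemetry.TrackEvent("ApplicationStarted"));
        _lifetime.ApplicationStopping.Register(() => _telemetry.TrackEvent("ApplicationStopping"));
        return next;
    }
}

// Registered alongside other services, with no changes needed in Startup.cs:
// services.AddTransient<IStartupFilter, AppLifetimeLoggingStartupFilter>();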

By registering a filter like this, every application start and stop is logged, and you can view those events in Azure. App Insights also supports collecting data locally; to learn more about this, please see the community standup from 9/18/2016 or Hanselman’s post on this topic.


This week the team spent time going over some interesting features in Azure App Insights and how you can use them to gather detailed logs about an application. You can learn more about the Azure App Insights APIs here.

Happy coding!


Updates to Web Tools in Visual Studio 2017 RC


Today we announced a new update to Visual Studio 2017 RC that includes a variety of improvements for both ASP.NET and ASP.NET Core projects. If you’ve already installed Visual Studio 2017 RC, you will be notified of the available update automatically. Otherwise, simply install Visual Studio 2017 RC to get the latest updates. Also, if you’re willing to help us improve our tools by giving feedback on features and ideas before we ship them, let us know who you are.

Below is a summary of improvements this update brings:

  • Workload Updates
    • .NET Core graduated from being a Preview workload to a standard part of the “ASP.NET and web development” workload.
    • If you only want to build .NET Core apps, there is a dedicated “.NET Core cross-platform development” workload available in the “Other Toolsets” category of the Visual Studio Installer.
    • The capability to open existing MVC4 projects is now available as an optional component for the “ASP.NET and web development” workload. If you do not choose it during installation, you will be prompted to install it when you open an MVC4 project.
  • You can now remote debug .NET Core apps running on Linux over SSH from Visual Studio using the Attach to Process dialog. See the debugger team’s blog post for details on how to enable it.
  • When debugging JavaScript running in Chrome, Chrome now launches with all of your extensions and customizations enabled once you log into the Chrome instance launched by Visual Studio.

Thanks for trying out this latest update of Visual Studio 2017! For an up-to-date list of known issues see our GitHub page, and keep the feedback coming by reporting any issues using the built-in feedback tools or via Twitter.

Notes from the ASP.NET Community Standup – January 24, 2017


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

Quick Note: Scott Hanselman will be doing a blog post on the three-hour Docker ASP.NET Community Standup from 1/17/17. We will add a link to his post as soon as it is available.

ASP.NET Community Standup 1/24/2017

Community Links

Dockerizing Nerd Dinner: Part 2, Connecting ASP.NET to SQL Server

Building microservices with ASP.NET Core (without MVC)

An In Depth Guide Into a Ridiculously Simple API Using .NET Core

File Upload API with Nancy, .NET Core in a Shockingly Small Amount of Code

Power BI Tag Helper: Part 1 – Power BI Publish to Web

Power BI Tag Helper: Part 2 – Power BI Embedded

New Year, New Blog

Custom Project Templates Using dotnet new

ASP.NET Boilerplate: Quartz Integration

Working with Multiple .NET Core SDKs – both project.json and msbuild/csproj

Working with a Distributed Cache in ASP.NET Core

Reloading strongly typed options in ASP.NET Core 1.1.0

Project.json to MSBuild conversion guide

ASP.NET Core Workshop

Questions and Answers

Question: What is the release status for Kestrel?

— Kestrel has been out for a while: we released 1.0 in June 2016 and 1.1 in November 2016, and we are working on the 1.0.4 and 1.1.1 servicing releases for the LTS and Current trains, due in February.

Question: Can you share success stories of customers using ASP.NET Core for large-traffic websites?

— We do know of customers who have deployed it successfully. However, we don’t have any that we can share publicly.

Question: Is it safe to use Kestrel in production?

— Yes, it is safe to use Kestrel in production. However, we don’t recommend using Kestrel as an edge server (that is, don’t expose it directly to the internet); a sketch of the typical reverse-proxy setup is below.
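For reference, the standard 1.x project template’s Program.Main already reflects this guidance. Kestrel serves the requests while IIS (via the ASP.NET Core Module) acts as the internet-facing reverse proxy:

var host = new WebHostBuilder()
    .UseKestrel()                 // Kestrel does the actual HTTP serving
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()          // pairs Kestrel with IIS as the edge server
    .UseStartup<Startup>()
    .Build();

host.Run();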

Question: Is ASP.NET Core supported in Visual Studio 2013?

— No, ASP.NET Core is not supported in Visual Studio 2013. Today, ASP.NET Core is supported in Visual Studio 2015 using project.json tooling and in Visual Studio 2017 using csproj tooling.

Question: What is the recommendation for doing ASP.NET Core authentication?

— In a post by Mike Rousos, he goes over in detail how to get started with ASP.NET Core authentication using IdentityServer4.

Our next standup will be on February 7. Thanks for watching, and happy coding!

Announcing Continuous Delivery Tools for Visual Studio 2017


Posting on behalf of Ahmed Metwally

Visual Studio Team Services enables developers to create build and release definitions for continuous integration and deployment of their projects. With continuous integration and deployment configured, unit tests run automatically on every build after every code push and, assuming the build succeeds and the tests pass, the changes are automatically deployed.

To better support this workflow in Visual Studio, we released an experimental DevLabs extension, Continuous Delivery Tools for Visual Studio with the Visual Studio 2017 RC.3 update. The extension makes it simple to automate and stay up to date on your DevOps pipeline for ASP.NET and .NET Core projects targeting Azure App Services and Azure Container Services. You will instantly be notified in Visual Studio if a build fails and will be able to access more information on build quality through the VSTS dashboard.

The Configure Continuous Delivery dialog lets you pick a branch from the repository to deploy to a target App service. When you click OK, the extension creates build and release definitions on Team Services (this can take a couple of minutes), and then kicks off the first build and deployment automatically. From this point onward, Team Services will trigger a new build and deployment whenever you push changes up to the repository.


For more details on how the extension works please check the Visual Studio blog. To download and install the extension, please go to the Visual Studio Gallery.

Microsoft DevLabs Extensions

This is a Microsoft DevLabs extension, an outlet for experiments from Microsoft that represent some of the latest ideas around developer tools. DevLabs extensions are designed for broad use, feedback, and quick iteration, but it’s important to note that they are not supported and there is no commitment that they’ll ever graduate and ship in the product.

It’s all about feedback…

We think there’s a lot we can do in the IDE to help teams collaborate and ship high-quality code faster. In the spirit of being agile, we want to ship fast, try out new ideas, improve the ones that work, and pivot on the ones that don’t. Over the next few days, weeks and months we’ll update the extension with new fixes and features. Your feedback is essential to this process. If you are interested in sharing your feedback, join our Slack channel or ping us at vsDevOps@microsoft.com.

Ahmed Metwally, Senior Program Manager, Visual Studio
@cd4vs
Ahmed is a Program Manager on the Visual Studio Platform team focused on improving team collaboration and application lifecycle management integration.
