.NET Core Global Tools Configuration

TL;DR:

Nate McMaster’s post provides a nice, detailed explanation of how to configure your csproj to build your app as a tool.

```xml
<PropertyGroup>
  <PackAsTool>true</PackAsTool>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp2.1</TargetFramework>
</PropertyGroup>
```

Build with dotnet pack (creates NuGet package to be published)

Nice to test by setting up your own NuGet.Server
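If you don’t want to stand up a full NuGet.Server just to test, a quicker sanity check (assuming the .NET Core 2.1 SDK is installed; foo-tool is an illustrative package id) is installing straight from the local pack output:

```shell
# Pack into a local folder, then install the tool from that folder
dotnet pack -o ./nupkg
dotnet tool install -g foo-tool --add-source ./nupkg
```

Uninstalling afterwards is just dotnet tool uninstall -g foo-tool.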


Configuration

The above is quite simple and works as expected. However, some projects might require more configuration.

I wanted to package an existing console app as a .NET global tool, whilst still keeping the console app’s current (multi-target) capabilities. So my original configuration was the standard multi-target specification:

```xml
<PropertyGroup>
  <TargetFrameworks>netcoreapp2.1;net472</TargetFrameworks>
</PropertyGroup>
```

After adding <PackAsTool>true</PackAsTool> to this configuration, the NuGet package failed to build, stating …Microsoft.NET.PackTool.targets(32,5): error NETSDK1054: only supports .NET Core…

I really didn’t want to create a new project, so I eventually got the NuGet package building by splitting out the PackAsTool into a specific .NET Core 2.1 PropertyGroup:

```xml
<PropertyGroup>
  <TargetFrameworks>netcoreapp2.1;net472</TargetFrameworks>
</PropertyGroup>

<PropertyGroup Condition="'$(TargetFramework)' == 'netcoreapp2.1'">
  <PackAsTool>true</PackAsTool>
</PropertyGroup>
```

The package now contained the console app binaries in their respective frameworks’ lib folders, as well as the netcoreapp2.1 build in the tools folder. This package was hosted successfully, but could NOT be installed.

The final solution was to build separate NuGet packages, which also meant giving them different names. So I just used a custom made-up condition (GlobalTool), kept the original PropertyGroup with its default name, and specified my tool with an explicit netcoreapp2.1 framework and a new PackageId:

```xml
<PropertyGroup Condition="'$(GlobalTool)' != true">
  <TargetFrameworks>netcoreapp2.1;net472</TargetFrameworks>
</PropertyGroup>

<PropertyGroup Condition="'$(GlobalTool)' == true">
  <PackAsTool>true</PackAsTool>
  <TargetFramework>netcoreapp2.1</TargetFramework>
  <PackageId>Foo-Tool</PackageId>
</PropertyGroup>
```

Note that using Condition="'$(GlobalTool)' != true" instead of Condition="'$(GlobalTool)' == false" means my default build & pack (where GlobalTool is undefined) still uses that PropertyGroup.

It’s also possible to add (or override) any other NuGet metadata properties (e.g. Description) in these sections.

Now my pack process looks like this:

```shell
dotnet pack
dotnet pack /p:GlobalTool=true
```

This results in two NuGet packages, which are both uploaded and can be installed independently. You can then install the global tool with dotnet tool install -g foo-tool.

Because ToolCommandName defaults to the assembly name, you can still invoke the tool with foo.
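If the default command name doesn’t suit you, my understanding is that ToolCommandName can also be set explicitly alongside the other tool properties (foo here being the desired command):

```xml
<PropertyGroup Condition="'$(GlobalTool)' == true">
  <PackAsTool>true</PackAsTool>
  <TargetFramework>netcoreapp2.1</TargetFramework>
  <PackageId>Foo-Tool</PackageId>
  <ToolCommandName>foo</ToolCommandName>
</PropertyGroup>
```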


Conclusion

I was pleasantly surprised by the flexibility of the csproj configuration. Making this possible took some digging and experimentation, but it was a good exercise in stepping outside the standard tooling.

It’s still very early days for .NET Core global tools, but they’re already awesome, and I can see them being widely adopted soon.

Xamarin Forms Chat Bot

Xamarin Dev Days in Cape Town introduced me to Xamarin Forms, which basically allows you to share C# code across native Android & iOS apps.

This app was targeted towards speech recognition. Having my own Android device and coding on Windows, I could really only build and test on Android. However, it’s still worth keeping in mind that this will eventually need to result in an iOS app as well, so all NuGet packages and libraries have to be usable in both projects.


App Design

Getting the hang of which elements are usable where and when was really difficult at first, but with lots of research online I started getting used to it. Being comfortable with SPA two-way binding frameworks, I found the INotifyPropertyChanged implementation very cumbersome to repeat on every page for all bindable properties.
After some digging I found Fody PropertyChanged, which did all the heavy lifting for me! It really was as simple as installing the NuGet package, adding a new FodyWeavers.xml file to both the iOS & Android projects, then finally implementing INotifyPropertyChanged on my VM with the interface’s single event:

```csharp
public class MasterViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
}
```
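For completeness, the FodyWeavers.xml dropped into the iOS & Android projects only needs to declare the weaver; as far as I know the standard file looks like this:

```xml
<Weavers>
  <PropertyChanged />
</Weavers>
```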


Speech-to-Text & Text-to-Speech

After the pages were ready, next up was the implementation of speech-to-text (stt) and potentially text-to-speech (tts). I started off by trying some Xamarin Forms Components in the hope of sharing all of that code as well. Some of it worked, but they often fell short in certain areas or on certain mobile versions. I think it’s very difficult to keep those components 100% compatible with all platforms, especially with the different OS APIs continuously evolving as well.

Keeping the domain logic in the shared project, I needed a way of triggering the native activities and retrieving their results. Xamarin Messaging Center is a pub/sub service which allows you to fire off events without needing to know where they will be handled. This worked well, but keeping track of multiple subscriptions eventually became really messy, and difficult to debug at times.

The final refactor of this was to remove the messaging center and use the Xamarin Dependency Service. In order to create a better conversation effect between tts & stt in Android, the events could be queued with the help of Android’s Utterance Progress Listener.
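As a rough sketch of the Dependency Service pattern (the interface and type names here are illustrative, not the app’s actual code), the shared project defines a contract and resolves whichever platform implementation was registered:

```csharp
// Shared project: the contract the platform projects implement
public interface ISpeechService
{
    void Speak(string text);
}

// Shared project: resolve the registered platform implementation
var speech = DependencyService.Get<ISpeechService>();
speech.Speak("Hello from shared code");

// Android project: registration via an assembly attribute, e.g.
// [assembly: Dependency(typeof(AndroidSpeechService))]
```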


Dialogflow

Formerly known as API.AI, Dialogflow is a brilliant free artificial intelligence service for converting text into context. You can train your own agent and integrate it with multiple existing systems or, as in this case, with custom applications.

There is actually an API.AI Xamarin Component which worked pretty well, but since Google acquired API.AI and changed it to Dialogflow, a new version for Dialogflow V2 API has been released. At the time of writing (2018/01/26), no Dialogflow SDKs exist for Xamarin with API V2, which to be fair, is still in BETA.
The main difference was the authentication.

  • V1 authenticates with an access token
  • V2 authenticates with Google

With the V1 access token you could easily authenticate from the mobile app using the API.AI library. Authentication with V2 requires a bit more configuration. There are various ways to implement Google OAuth 2.0, however for my purpose I wanted to authenticate to Dialogflow on behalf of the user, NOT using the user’s own credentials. This meant that I had to continue with the OAuth 2.0 Service Account implementation. That involved generating a certificate, so I ended up creating a standalone Web API.


Web API

I went with a lightweight ASP.NET Core 2 Web API, which consisted of only one endpoint (IntentsController) to facilitate the communication between the mobile app and Dialogflow.

Following the Google service account .NET OAuth 2.0 guide, the implementation was pretty straightforward.
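Roughly sketched (based on my reading of the Google.Apis.Auth library; the key file path and scope are placeholders), getting a token for the service account looks like:

```csharp
// Load the service account key downloaded from the Google Cloud console
var credential = GoogleCredential
    .FromFile("service-account-key.json")
    .CreateScoped("https://www.googleapis.com/auth/cloud-platform");

// The token is then sent as a Bearer header on Dialogflow V2 requests
var token = await credential.UnderlyingCredential
    .GetAccessTokenForRequestAsync();
```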

In order to communicate with this API, I created a new .NET Standard 2.0 project that utilized the basic .NET HttpClient. It didn’t have to be in a new project, but I thought it best for reusability & testability, and it was also a nice little experiment to see how easy it was to reference my own .NET Standard project. It turned out to be as straightforward as expected, and everything just worked together nicely.
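That client library essentially boils down to a thin wrapper over HttpClient; a minimal sketch (the class name, route, and base address are all illustrative):

```csharp
public class IntentsClient
{
    // HttpClient is intended to be created once and reused
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:5000/")
    };

    public async Task<string> DetectIntentAsync(string text)
    {
        var content = new StringContent(text, Encoding.UTF8, "application/json");
        var response = await Client.PostAsync("api/intents", content);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```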

ECMAScript vs TypeScript

I was recently tasked with investigating a tech upgrade for an existing system that started about 5 years ago.
The investigation covered various aspects of the system, but for the purpose of this post I will only focus on the language.

App initial notes:

  • It was one of the company’s first SPAs
  • Written in TypeScript version-very-old
  • All the devs code in Visual Studio (server side as well) for Windows

Keep in mind that I don’t know much of this system’s critical domain. I have, however, worked on a fair spread of apps using TypeScript / ECMAScript 2015 / good ‘ol JavaScript (ES5). So hopefully that counts in my favour towards having an unbiased opinion, initially at least…

New scaffold

After deciding on the framework, I set off to start a new app, full of piss and vinegar.
Almost immediately I was faced with the question: ES2015 or TypeScript?

No problem, let’s go with the current app’s language and stick with TypeScript. After all, that should make our lives somewhat easier by just copying the current files and leaving their format as *.ts. Sure, we’d still have to adapt some of the actual code later on, but for now let’s just get it to transpile.

Now if you have ever tried to upgrade any application’s framework / runtime engine / target platform that is ~5 years out of date, you will know this is never just a simple flip of a switch. The rabbit holes are deep and dark, but eventually you might find some light at the end of that tunnel. I was then left with a couple of broken / commented-out references still to be included.

3rd Party Libraries

Right, scaffold complete. Next up: include some packages.
Almost all applications and packages have dependencies on other libraries, and today they should really use some kind of package manager or CDN. This, however, was not commonly used or available a few years back, with most packages simply checked into source control.

We have since come a long way with the help of Node & npm, and I was going to take full advantage of that.
On previous projects I have used tsd, which now carried a big warning: DEPRECATED: TSD is deprecated, please use Typings. It was never my favourite package manager, with a lot of packages missing definition files or shipping out-of-date ones, so no tears were shed.

Fair enough, on to Typings it is then. However this also has a deprecation notice regarding TypeScript@2.0 and stating: …some definitions on DefinitelyTyped may not work with the Typings approach because of new TypeScript features…. Red flags raised.

ECMAScript 2015 Prototype

At this point I was exhausted and thought about possibly converting over to ES2015 JavaScript. The syntax is mostly the same, and with the tools available today you get most of the benefits you would in TypeScript. Obviously you lose the strict typing (except for types declared as any, of which this project had plenty), but you also gain dynamic flexibility (which I personally really like).

As a prototype, I updated the scaffold for ES2015 and used the Babel transpiler. After a quick copy-paste & minimal find-replace, the code was actually compiling. Not running yet, just successfully transpiled. Progress at last.
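For context, the Babel setup of that era amounted to little more than a preset in a .babelrc file (the "env" preset being the Babel 6 convention current at the time):

```json
{
  "presets": ["env"]
}
```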

Package manager round 2: simple. npm install those bad-boys, configure some modules, and it was done. A working upgraded example we could immediately continue working on, but would it actually be acceptable to the team?
During the demo, some team members initially didn’t even notice the language change. Most seemed unfazed by the change, some were actually excited, and a couple were understandably concerned.

Conclusion

Eventually the move to ES2015 was accepted and embraced by the team. With the preferred editor now being VS Code, some can even do away with full Visual Studio and Windows if they prefer. They already have the knowledge to continue with the proper ECMAScript standards, and existing good procedures (PR & QA) in place.

Debugging TypeScript has always been another massive pain. For an established existing application, it’s preferred to have browser debugging available when having to make domain functionality changes. In the end, ease and practicality outweighed the benefit of type-safety.

Having a look at the ECMAScript 2015 Compatibility Table, we could probably start doing away with ES5 completely in the near future. Deprecated package managers and incompatible Typings features, causing builds to fail and multiple headaches, could be the start of TypeScript’s death.

SignalR

SignalR On OWIN & JavaScript

The purpose of this post is to show the different SignalR implementations available for an OWIN WebApi & JavaScript.


OWIN WebApi

NuGet Packages used:

  • Microsoft.Owin.SelfHost
  • Microsoft.AspNet.SignalR.SelfHost
  • And all related dependencies

To get started, follow the self-host tutorial linked in the Resources section below.

In the Startup class, where the OWIN configuration is set, we must include the SignalR configuration to run in the same pipeline.

```csharp
public void Configuration(IAppBuilder app)
{
    var httpConfiguration = SetupHttpConfiguration();
    app.UseWebApi(httpConfiguration);

    app.Map("/signalr", map =>
    {
        // Set up the CORS middleware to run before SignalR.
        // By default this will allow all origins. You can
        // configure the set of origins and/or HTTP verbs by
        // providing CORS options with a different policy.
        map.UseCors(CorsOptions.AllowAll);

        var hubConfiguration = new HubConfiguration
        {
            // You can enable JSONP by uncommenting the line below.
            // JSONP requests are insecure, but some older browsers (and some
            // versions of IE) require JSONP to work cross-domain.
            // EnableJSONP = true
        };

        // Run the SignalR pipeline. We're not using MapSignalR
        // since this branch already runs under the "/signalr" path.
        map.RunSignalR(hubConfiguration);
    });
}
```

The following two hubs exist purely to show the difference in how they are used in the front-end.

  • Generated Proxy Hub

    ```csharp
    public class GeneratedProxyHub : Hub
    {
        // http://stackoverflow.com/questions/31169509/signalr-how-to-truly-call-a-hubs-method-from-the-server-c-sharp
        private static IHubContext context = GlobalHost.ConnectionManager.GetHubContext<GeneratedProxyHub>();

        public override Task OnConnected()
        {
            var qs = Context.QueryString["user"];
            Send("GeneratedProxyHub", qs);
            return base.OnConnected();
        }

        public void Send(string name, string message)
        {
            Clients.All.addMessage(name, "Generated Proxy Hub: " + message);
        }

        public static void Push(string name, string message)
        {
            context.Clients.All.addMessage(name, "Generated Proxy Hub: " + message);
        }
    }
    ```

There is nothing special in the setup of either hub, except for the IHubContext in the Generated Proxy Hub (above). That is only used when calling Push() from a controller, as explained later, so the IHubContext can safely be removed without breaking the normal functionality.

  • Manual Proxy Hub

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    14
    15
    16
    17
    18
    19
    public class ManualProxyHub : Hub
    {
    public override Task OnConnected()
    {
    var qs = Context.QueryString["manualProxyUser"];
    ManualProxySend("ManualProxyHub", qs);
    return base.OnConnected();
    }

    public void ManualProxySend(string name, string message)
    {
    Clients.All.addManualMessage(name, "Manual Proxy Hub: " + message);
    }

    public void ManualProxyPush(string name, string message)
    {
    Clients.All.addManualMessage(name, "Manual Proxy Hub: " + message);
    }
    }

Front-End

The two front-end implementations (best explained in the JavaScript client guide linked under Resources) are:

  • Generated Proxy
  • Manual Proxy (without generated proxy)

The major difference being:

  • Generated Proxy: Gets the hub’s context from a .js script generated by the SignalR WebApi, loaded on page load via <script> tags in the index.html

    ```html
    <script src="bower_components/signalr/jquery.signalR.js"></script>
    <script src="http://localhost:9000/signalr/hubs"></script>
    ```

    Usage:

    ```javascript
    var generatedProxy = function() {
        // By default, the hub location is the current server;
        // if you are connecting to a different server, specify the URL before calling the start method.
        $.connection.hub.url = "http://localhost:9000/signalr";

        // Declare a proxy to reference the hub.
        var generatedProxyHub = $.connection.generatedProxyHub;

        // Create a function that the hub can call to broadcast messages.
        generatedProxyHub.client.addMessage = function (name, message) {
            addMessage(name, message);
        };

        // Add a querystring to send data to the server on connect.
        $.connection.hub.qs = { 'user' : username };

        // Start the connection.
        $.connection.hub.start().done(function () {
            $('#sendmessage').click(function () {
                // Call the Send method on the hub.
                generatedProxyHub.server.send($('#displayname').val(), $('#message').val());
                // Clear text box and reset focus for next comment.
                $('#message').val('').focus();
            });
        });
    }
    ```
  • Manual Proxy: Only accesses the hubs when needed. Notice the explicit createHubProxy("ManualProxyHub")

    ```javascript
    var manualProxy = function() {
        // Declare a connection and proxy to communicate with the hub.
        var hubConnection = $.hubConnection("http://localhost:9000/signalr");
        var hubProxy = hubConnection.createHubProxy("ManualProxyHub");

        hubProxy.on("addManualMessage", function(name, message) {
            addMessage(name, message);
        });

        // Add a querystring to send data to the server on connect.
        hubConnection.qs = { 'manualProxyUser' : username };

        hubConnection.start().done(function () {
            $('#sendManualProxyMessage').click(function () {
                // Call the Push method on the manual proxy hub.
                hubProxy.invoke('ManualProxyPush', $('#displayname').val(), $('#message').val());

                // Clear text box and reset focus for next comment.
                $('#message').val('').focus();
            });
        });
    }
    ```

Other small differences include dynamic function calls vs event binding and invoking methods.

SignalR access from ApiController

Since requests to an ApiController arrive with an HTTP context, we need a separate way to get hold of the hub’s context for its WebSocket-based clients.

For ease of use I created a static IHubContext in the Generated Proxy Hub (above), which can be used when calling the hub’s methods from controllers, as seen below.

```csharp
public class TestController : ApiController
{
    public string Get()
    {
        GeneratedProxyHub.Push("WebApi", "Hello World!");
        return "Result sent to request and connected SignalR hub clients";
    }
}
```

Resources

Demo Source Code: https://github.com/johan-v-r/SignalR

http://www.asp.net/signalr/overview/deployment/tutorial-signalr-self-host

http://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-javascript-client

http://stackoverflow.com/questions/31169509/signalr-how-to-truly-call-a-hubs-method-from-the-server-c-sharp

https://stackoverflow.com/a/31063193/5954805