Bash on Windows is Beta…

So last night I was setting up a machine and playing with a Node project. I decided to jump into Bash on Windows to do Node development and ran into a weird error: “ERROR in EINVAL: invalid argument, uv_interface_addresses”.


After a little Googling I found others reporting the same issue. The problem stems from iterating over network interfaces, which apparently doesn’t work in Bash on Windows. I was able to switch to my normal command line, where Node is installed, and run the same application without a problem.

Figured I would share in case anyone else is struggling. We all need to remember this is Beta software and report the issues so they eventually get fixed.

In this case the issue is with the os.networkInterfaces() call in the code I’m using.

My Node.js app looks like this; it runs on Node.js on Windows but fails in Bash.

const os = require('os');

// This call throws "EINVAL: invalid argument, uv_interface_addresses"
// under Bash on Windows.
const interfaces = os.networkInterfaces();


Video of issue:

Microsoft Paint in 3D???

Double post day. I must be getting ambitious…

So some videos popped up online today regarding a new version of Microsoft Paint.

Why would Microsoft add 3D? My thinking here is twofold. How else are everyday users going to create 3D content for the HoloLens? And if rumors are true about an all-in-one Surface being announced on October 26th, 2016, they need some apps to show it off. I think both use cases are awesome. The more 3D content creation we can get, the better our AR/VR experiences will be. Remember that Microsoft has opened its Holographic platform to OEMs, so more devices will be coming, and existing ones will support the platform over time.

I’m working on figuring out how to get early access right now. I’d love to play with it on my Surface Book.

NOTE: Microsoft, if you are listening, give me a preview build to play with and I’ll do some screencasts with it.


Between October 5th and 7th Oculus held its Oculus Connect 3 event. It covered where Oculus is today as well as where it may go in the future. Mark Zuckerberg even jumped on stage and demoed some of their work. Zuckerberg commented that VR is a great platform to put “people first”, which is great framing for VR. To me, everything in VR is about experiences, whether social or solo, and either way you have to put the user first.

Zuckerberg then jumped into a virtual reality demo showing off a social app. This included avatars with expressions and moving hands. The hand movement I easily understood, since they were using the Oculus Touch. Even mouth movement is easy, since it is driven off the audio. I’m not sure how they did facial expressions, though. Unless there are a lot of extra features in the sensors that ship with it, I don’t see how they pulled that off.

Some of the interactions shown in the demo were rather impressive. Through context menus attached to the wrist of each hand, a user is able to do things like answer a Facebook call or share something on Facebook. One example they showed was sharing a video. One could easily see this turning into a whiteboard and giving users a way to work remotely. This plays very well into the vision Microsoft has shown for AR and VR working together. It will be interesting to see where this all ends up and when it becomes mainstream.


Oculus launched a few new pieces of hardware and announced some things they are working on. The first is Oculus Touch, which has been the missing component for Oculus. The HTC Vive and PlayStation VR both launched with controllers out of the gate, so it seems like it has taken forever for Oculus to ship theirs. Oculus also launched an in-ear headset for the Rift. The third piece of hardware is additional sensors: to get room scale with the Rift you will need three sensors. The Rift comes with one and the Touch controllers include a second, but for room scale you need to buy one more. As of today only two pieces of hardware are available: Oculus Touch at $199.00 US and the Oculus Earphones at $49.00 US. The sensors don’t seem to have hit the store for pre-order yet but are supposed to cost $79.00.

The Oculus team discussed working toward a standalone version of the Oculus Rift as well. They also talked about a future where we would be untethered from the PC and yet use a PC to power it. This sort of plays into my vision of where I think Apple may go with augmented reality.


It is awesome living in the tech world today. We are on the cusp of some great things in virtual reality, augmented reality, and AI. I can’t wait to see where this all lands in the next five years, and I really hope to be part of it.

Google is becoming Microsoft

So earlier this week Google had their “Made by Google” event. I figured I would put my thoughts on the event in writing, focusing on the transformation that Google is starting to go through. I also plan a separate post on the Pixel itself as well as Daydream.

I’ve seen a number of posts floating around the internet claiming that “Google is becoming Apple”. Seems like a bit of clickbait to me. Google is by no means becoming Apple; they don’t fully control the Android ecosystem. Who they are acting like is Microsoft. Let that sink in for a bit…


With this event Google announced their first phone fully designed by Google. HTC happens to manufacture it, but that is no different than the iPhone, which is primarily built by Foxconn. Like Microsoft, though, Google will be competing not only against Apple but also against OEMs that use the Android operating system. To me this is no different than Microsoft with the Surface Pro and Surface Book.

Apple is a pure hardware company that controls its full ecosystem, which Google will not be able to do with Android since they open sourced it. Google will never be a hardware company, full stop. Their primary focus is their ad network and the Google Play ecosystem, and Apple is not, and probably never will be, at that scale on the services side. Apple will continue to be a hardware company for years to come, while Google will be a services company with hardware and OEM vendors.


A lot of the event was about Google Assistant and AI in general. This play is very similar to Microsoft’s. At Microsoft Ignite, Satya Nadella spoke about “democratizing AI”. Both Google and Microsoft have their own AI plays going on: Microsoft is approaching it with its Cortana assistant, while Google has built its own. Machine learning and deep learning are being attacked directly by both Microsoft and Google. This is not an area where Apple has done really well. I personally use an iPhone and iPad, but Siri is not as intelligent as either Cortana or Google’s platforms.

Apple really is a hardware company, not a software company. Just look at iTunes… They do great things with hardware, but in the software world they just are not there. Google, on the other hand, is very well versed in software, just like Microsoft. Both Microsoft and Google play in open source quite a bit as well. Some could say Apple is going there with Swift, but it is still very early, whereas Google and Microsoft have very mature software platforms.


Google is becoming Microsoft as much as Microsoft is becoming Google. Microsoft’s new dedication to open source is much more of a Google-style play, and Microsoft has focused on being a cloud-based company on every platform. Even Google does this with their applications on iOS. It is clear to me that Microsoft and Google are in a different category than Apple. I hope people stop comparing Google to Apple, as they are two completely different things. I have no problem, though, with comparing Microsoft and Google.

Do you have some thoughts on this? If so, reply and share them. If you have a blog post about it, share a link as well. I’d love to hear what others are thinking.

What if…Apple Augmented Reality

So recently Tim Cook has spoken a lot about augmented reality and said that Apple is heavily investing in it. Based on their new iPhone and some of its technology, I have some ideas about how they may approach AR.


What if Apple’s approach is not a fully self-contained device like the Microsoft HoloLens, but rather just a visor with sensors that talks to your iPhone as the central compute? How, you say? Well, they did launch a new wireless chip they designed.


It may be feasible that they have some new secret sauce that would allow you to carry a lightweight visor to experience augmented reality. I for one would love a lightweight headset. One of the challenges with putting compute power in the headset, as the Microsoft HoloLens does, is that you are limited by battery life and size. I think it might be feasible to have a lightweight device that only handles sensor input and display. That, paired with your iPhone as the compute, could be a very interesting combination.

What about power on the go?

What if the visor had a case that charged it when you put it away, like the Apple AirPods case?


Again, an interesting concept. Maybe everything being done now is just a series of steps toward an AR visor from Apple.

I hope that some of my thoughts are true. I really want to see Augmented Reality mainstream.

P.S. Apple, if you are looking for a developer to test out your AR, let me know.

Blender and Unity Part 1

As some of you know, I’m focusing a lot of my personal time on augmented and virtual reality. To create experiences on these platforms that are truly engaging, you have to add 3D and animation or movement to your applications. In this blog post I will walk you through using Blender, a free 3D tool, to create a 3D asset with a basic animation and import it into Unity. I will also show how to wire up the animation in Unity. At the end I will show what our creation looks like through the HoloLens.

To get started, I first must cover some basics of how to navigate Blender. Interactions in Blender assume a three-button mouse. The right mouse button is primarily used for selection, the left mouse button confirms actions like scaling or rotation, and the middle mouse button handles rotating and zooming the view.

Now open Blender. We are first going to delete the default items in Blender to start fresh. Select the Lamp, Camera, and Cube, delete them by pressing the Delete key on the keyboard, and then click Delete in the dialog that appears.


Add a UV Sphere

Next I switch to front view by pressing the 1 key on the number pad (not the 1 above the QWERTY section of your keyboard). I also make sure I’m in Ortho mode by pressing the 5 key on the number pad.

Next we will add a UV Sphere. There are a number of ways to do this. One is to click the Add menu below the viewport and select Mesh -> UV Sphere. A shorter way is to press Shift+A, and a menu will appear in your viewport.



Next we will duplicate the UV Sphere to create the head of our BB8. To do this we can use the Object menu or just press Shift+D. This creates a duplicate; move your mouse and you will see the new object follow it, then left click to place it. Next we will scale it down and position it to be the head of BB8.


Once you scale and position the head, we will add a cylinder for its robotic eye. This is done by pressing Shift+A and selecting Mesh -> Cylinder. We will need to scale it by pressing “S” and making it the appropriate size. We also need to rotate it by pressing “R”. To rotate on a specific axis, press X, Y, or Z after that. In the end we should have something that looks like this.


Next, to make our lives a bit easier, we will parent our objects. First we will parent the Cylinder to the head. To do this, first make sure you have nothing selected. Then select the Cylinder by right clicking. Then, holding the Shift key, right click on the head. To parent, press Ctrl+P and select Object from the menu that appears. This ensures that any manipulation of the head object will also affect the robotic eye we created. We can now parent the head to the body following the same process.

We are now ready for some animation.

The first animation we are going to do is just an idle state where the head rotates back and forth. To do this we need to create some keyframes. In my case I set my view to front (1 on the number pad), and I rotated the cylinder to the left by pressing “R” and then typing 180 degrees. At this point it is time to add keyframes. To do that, open the object properties panel on the right side of Blender, like below.


To insert a keyframe on location, rotation, or scale, hover over the value in the panel and press “I”. If you successfully added a keyframe you should have something like this.


The yellow tells you the value is keyframed. Next we select a frame in the future, like frame 20.


We now rotate the head by pressing “R” on the keyboard followed by “Z”; this rotates on only the Z axis. Position it where you like, and you will see the transform values are now green, meaning they have changed but are not keyframed yet.


Again we hover over each value and press “I” to keyframe them. Next we move the head back and add one more keyframe. In the video below I show an alternative way to add the last keyframe by using the Dope Sheet and copy-pasting keyframes.


Import into Unity

Importing into Unity is very simple. By default you can just drag the Blender file into Unity’s Assets folder; this automatically uses Blender’s FBX export. Once this is done you can drag the asset into your scene.


This will not automatically animate our character yet. First select the bb8 model and, in the Inspector, go to the Animations tab. Here we will see our animation, but it has way too many frames. We need to trim it by changing the end time to match the last movement of our animation. We can also change the name here to Idle; we will use this name later. You will also want to check the Loop Time checkbox.


We now need to add an Animator Controller to our assets by clicking Assets -> Create -> Animator Controller. You can also do this by right clicking in the Assets window and selecting Create -> Animator Controller. Now double click the Animator Controller you created. This opens the Animator pane.


In the Project Assets folder, click the right arrow on our model. You should see something like this.


You should now see the Idle Animation. Click and drag it to our Animator pane. It will automatically attach to Entry.


Now we need to apply the Animator to our model in the scene. To do this, drag the Animator Controller from the Project folder onto either the object in the scene or the Animator slot in the Inspector. Now if you hit Play, our animation should be playing.

Congratulations, you have now created your first animation. In a future post I will show how to control another animation from code.

Here is a full video tutorial of this, with additional steps like adding basic materials to our model.