Quality code and lots of coffee

Author: joe.vandebilt

Getting Google Assistant to do just about anything

So Christmas is right around the corner and I was helping put up the decorations. One decoration in particular was a set of window lights with multiple settings: you can control them via a switch on the cable, or you can download an app and connect to them via Bluetooth. Pretty nifty.

Our tree on the other hand is on a smart plug, which I can control via Google Assistant. Since Google Assistant can chain commands now, it’s pretty easy to just tell her to “Let it snow” and she’ll turn on the tree, dim the lights, start playing a fireplace video on the TV and play my Christmas playlist on Spotify – but those pesky lights in the corner won’t switch on until I open the app, enable Bluetooth and connect. This obviously would not do – and I set about trying to bridge the gap.

The first thing to note is that Google Assistant doesn’t have an SDK (unlike Alexa, which has events in AWS), so you’re a little limited. What I basically wanted was a simple command, such as “Hey Google, switch on the window lights”, that would make the window lights come on. So I did some research and found that although there is no SDK, there is an online service named IFTTT (If This Then That) where you can hook into Google Assistant. So in order to activate my lights I’d need to write an IFTTT applet.

The plan was simple. I’d write an applet that would send a signal to my Raspberry Pi, then write a script on the Pi that would use the Bluetooth radio to contact the lights. I knew this could be done through Python, so I performed the actions on my phone, snooped the Bluetooth traffic and recreated it. Now all I needed was a trigger.
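Before moving on to the trigger side, here’s a sketch of the replay idea (my lights.py isn’t shown here, and this is an alternative shell approach rather than the Python one): once the snoop log gives you the light’s MAC address and the characteristic handle/value pairs, BlueZ’s gatttool can replay the write.

# Hypothetical replay of a snooped BLE write using BlueZ's gatttool
# (the MAC address, handle and value are placeholders from your own snoop log)
gatttool -b AA:BB:CC:DD:EE:FF --char-write-req --handle=0x0025 --value=01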

Applets are written through the IFTTT Platform. The platform has an enterprise licence and a personal licence, so producing an applet for myself was free. I created a service for myself and went to Applets to produce one.

Clicking New Applet, the process was fairly straightforward: an applet is made up of a Trigger and then one or more Actions, and our Trigger we knew was Google Assistant. Once selected you have a few choices to build your command structure. I went with a simple command and a text ingredient – this is the phrase you say after “Hey Google”, and the dollar sign $ is a wildcard for what you say afterwards – so I simply set mine to

Pi Function $

I chose Pi Function because for some reason my assistant struggled to make out “custom function” and kept thinking I was after info on Canoes. After you’ve set up a command format you can tell it to say something back to you and that’s the trigger all set up.

The action is a little more complicated. I figured the easiest thing to do for my Pi would be to receive an API hit. I could host a PHP page fairly easily and my IP is fairly static – static enough for Christmas at least. So I wanted something that would send a web request.

After searching the available Actions I happened upon the Webhooks service, which sends a formatted request to any endpoint you like. When you set up the webhook you can set the fields to static values, or let your users select their own when they come to activate the applet. I elected the latter option because, even though this was private, I didn’t want to list my endpoint explicitly.

The only ‘gotcha’ is passing the text ingredient – IFTTT won’t let you use the text ingredient unless it’s explicitly set in the applet settings, so for the content type and body of the webhook I set a JSON request including the text ingredient.

The upshot is that I’m sending a JSON POST request to my endpoint, including a secret key so others can’t just spam my endpoint, and passing in a function parameter which names the function I want to run.
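The applet configuration screenshot hasn’t survived, but judging by the PHP script further down, the webhook body would have looked something like this – {{TextField}} is IFTTT’s placeholder for the text ingredient, and the key is obviously not the real one:

{
    "API_KEY": "OBFUSCATED FOR REASONS",
    "Function": "{{TextField}}"
}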

Once the applet has been created you should have the option to activate it against your account. You may need to grant IFTTT some permissions against your Google account, but once added you should be able to set the correct endpoint.

Now all I had to do was write a PHP script to pick it all up. I installed Apache, secured the connection with Certbot and wrote a few scripts. I was essentially going to be making use of PHP’s shell_exec function, so I just needed some scripts in the web dir to get me started. One of the important things was also writing a script to log the requests that came in, just in case I did start getting some abuse. My api.php looked like this:

<?php

ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);
header("Content-Type: application/json; charset=utf-8");

// Take the raw JSON from the request body
$json = file_get_contents('php://input');
$data = json_decode($json);

if ($data->API_KEY === "OBFUSCATED FOR REASONS") {

    // Log the request (escape the values before handing them to the shell)
    $IP = $_SERVER['REMOTE_ADDR'];
    $Function = $data->Function;
    $output = shell_exec("./LogRequest.sh " . escapeshellarg($IP) . " " . escapeshellarg($Function));

    // Now we need to run our request
    $function = strtolower($Function);
    if ($function == "restart" || $function == "reboot")
    {
        shell_exec("./restart.sh");
    }
    else if ($function == "kodi")
    {
        shell_exec("./kodi.sh");
    }
    else if ($function == "lights")
    {
        shell_exec("python ./lights.py");
    }
}

?>
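The logging script itself isn’t shown above, but a minimal sketch of what LogRequest.sh might have looked like (the log path is hypothetical) is:

#!/bin/bash
# Append a timestamped line recording the caller's IP ($1) and the requested function ($2)
echo "$(date '+%F %T') $1 $2" >> ./requests.log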

Another gotcha you may find is that the text ingredient from IFTTT is, for some reason, limited to single words. This was a pain but not fatal, because Google can indeed chain commands: you can take a natural phrase like “Hey Google, turn on the window lights” and make it mean “Hey Google, pi function lights”.

The most interesting thing for me, as you may have noticed in my script, is that you don’t have to stop there – with an API hook on a Raspberry Pi a lot more of the world is available to you. I added commands such as Reboot and Kodi so that when there’s an issue I can trigger these actions without having to pick up the remote.

I honestly believe that through shell scripts you can accomplish a lot – and with this tech in place I can make Google Assistant do just about anything.

Stats Tracker API

The Stats Tracker API was a freelance project I undertook for a local browser-game development company.

To provide more in-depth reporting to their customers they wanted a small client-side library they could inject into their games to track certain events; it would then send these events to an API, which would present them to the end user via custom reports. The project was broken down into three parts:

API

An exposed API that would log the events from the game. This had to be versatile enough to accommodate data packets of various lengths and structures, as the packet structure could not be identified until the event had begun to be processed.

Administration Area

The owners of the Stats Tracker needed the ability to manage clients and games within the system. For this an administration area was created, allowing administrators to manage clients, games, the events for those games, and the parameters those events would pass in. This provided the foundation for the structure of the packets the API would receive.

Client Area

Clients using the system needed access to their own area with a locked-down and limited view of their own products, but no more. As this was a multi-tenanted system it was vital that clients did not have access to other clients’ games or reporting.

The reporting section itself also needed to provide custom views and timescales for the data collected and display it in a visual format. For this I made use of a charting library (see Chart.js below).

Tech Stack

ASP .Net Core MVC

.Net Core, as I’ve mentioned on this blog, is incredibly versatile – and given the scope of the system I wanted more out of the box than PHP provided. MVC provided a fantastic framework to build the API and accept packets, and .Net Core Identity allowed the flexibility needed to build the administration and client-side areas. Because of the cross-platform nature I knew that when the app was handed over it could be hosted anywhere. Razor syntax and view models also meant that web pages could be built with relative ease to serve the client-facing system’s requirements.

Chart.Js

Chart.js is a free but powerful open-source charting library making full use of HTML5. Although the documentation is lacking, Chart.js can very easily be integrated into custom reporting to provide good-looking visual graphs.

Entity Framework

From the outset this project had very clearly defined objects within the system. Entity Framework provides a fantastic interface between the database and the service layer, so that both the API and the GUI can communicate with the database without you needing to write the SQL yourself. In some cases with reporting I did need to write custom SQL to optimize queries, and EF allows this alongside its usual functionality.

MySQL

Not as bloated or expensive as SQL Server, MySQL provides a clean, straightforward and cheap alternative as a database provider. What’s more, there is Entity Framework support for MySQL, so it can be integrated easily into a Microsoft tech stack – which meant that for the first time I was able to integrate MySQL with ASP .Net hosted on a Unix server.

Fiaz is 40

I was commissioned to collaborate with a PR company and a web designer to produce an online guestlist for a high-society party celebrating our client’s 40th birthday. The site had to deliver a list of bespoke features.

  • Guestlist Management – including +1’s
  • Automated Invitation Generation (QR Codes)
  • Email Updates to guests
  • Chatbox for guests to communicate with host
  • Directions to venue and venue information
  • Photo Gallery and Media Management

The site was completed and accepted by the client ahead of schedule, and used as intended throughout the birthday celebration with resounding success.

Server Architecture

Building an application is only part of the process; the platform your code runs on is just as important to know about. Environmental differences can cause unexpected bugs in your code, and a knowledge of server architectures can be a vital asset to your tech stack. Servers need to perform a range of tasks – managing domains, user groups and permissions, and running your applications – ultimately to provide a service to you. These services can include web servers like IIS or Apache, databases such as SQL Server or MySQL, and email services like Exchange or Postfix/Dovecot.

As well as having an in-depth knowledge of desktop operating systems from Windows XP to Windows 10, and Linux environments such as Ubuntu, Fedora and OpenSuse, I have extensive knowledge of both Unix and Windows Server. Windows Server as always provides the advantage of managing everything through a GUI, giving your server management a little bit of ease, and being able to run some development tools natively is always a bonus.

Running a Unix server by comparison is a lot more hands-on, but provides much greater flexibility when it comes to server architecture (as well as having the bonus of being free), and with the new territory of .Net Core, Unix is becoming a more viable option in businesses. I am familiar with most Debian- and Red Hat-based systems – in fact this website is running on a CentOS system in the cloud.

It is one thing to know that these services are available to you, but understanding the alternatives on both platforms, as well as how to set up and use them, is a valuable skill.

Source Control

Source control speaks for itself – it allows source code to be managed, maintained and worked on from multiple sources, keeping track of changes and allowing you to revert to – or deploy – specific versions of code.

The fundamentals of source control are the same everywhere: you create a working copy, make your changes, check your changes in, and anything that conflicts must be resolved before committing. While there are differences between varieties of source control, a basic understanding can go a long way. My introduction to source control was Subversion – setting up a Subversion server on a remote machine so that I could manage code between multiple developers throughout my university projects. From there my affinity for source control grew: in a professional capacity I have completed complicated merges between branches after long-running parallel developments, enforced repo cleanliness and managed the integrity of branches.

Throughout my career I have become proficient in using:

  • Git
  • TFS
  • Subversion
  • Source Safe (yes, really)

Unit Testing

Any developer can give you a number of reasons why unit testing is important – however few actually do it, and even when they do, code coverage isn’t great. My knowledge of unit testing focuses on 3 areas:

  • Compile Tests (MSBuild, dotnet build)
  • Core Functionality Tests (NUnit, XUnit)
  • Front End Tests (Selenium, SoapUI)

Compile Tests may seem like an obvious step, but a remarkable number of people will commit a solution without checking whether it builds. Post-build events or continuous integration can easily check whether your solution compiles and runs without encountering errors. I am a firm practitioner in the belief that the first step of any build test should be “does it actually build?”.

Core Functionality Tests are what most people think of when writing unit tests. NUnit and XUnit are great ways to test core pieces of functionality, making use of mock frameworks (Moq, Rhino) to exercise your code without any impact on the data sources you have. I aim to have my projects run their unit tests via a build script or build server to ensure quality.
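On a .Net project the first two layers can be covered with a couple of CLI calls along these lines (solution and test-project names are hypothetical):

# Compile test: fail fast if the solution doesn't build
dotnet build MySolution.sln
# Core functionality tests: run the NUnit/XUnit test project
dotnet test Tests/Tests.csproj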

The front end is an often forgotten and untested area, but tools are available to test your API and your web interface. Tools such as Selenium can be driven from within your unit tests to run against a live site and assert actions. SoapUI can fire formatted requests at multiple types of online APIs to ensure that the changes you’ve made haven’t affected your service.

Nick Infinity Islands – API & Back End

I was approached by Peg Digital to help with a new project they were working on for Nickelodeon: a series of games for Infinity Islands. As part of the game, players would have the option to upload an image to a “Scoreboard”, and Nickelodeon requested that these images be saved to a database and retrieved later for display at the user’s request.

The approach I took was to build an API and fit in with their existing use of JavaScript by giving them my own JavaScript client library to hit the PHP API in the back end. The API had to handle various tasks and pass relevant information back to the front end – saving and retrieving images, fetching galleries and, later on, serving reports on user interaction with the API. I also had to set up the Amazon Web Services host which runs this back end. All of this had to be done with regular communication with the client, keeping everything secure to adhere to data protection requirements, and overcoming new challenges as requirements changed or were added.

The project faced its own challenges: after the API was set up to handle image capture, the storage space on the host server started to reach its limit, and after live release the RAM limitations of the server meant some quick refactoring and code redesign were needed to get the server running smoothly.

The project was delivered on time, with positive feedback from the client.

.Net Core 2 – Cross Platform Code

When I started programming properly in 2012 during my degree, there were a few truths. C# and Visual Studio were for Windows. Python and Perl were for Unix, and Mac OS was something I just didn’t ever want to touch.

Visual Studio also cost a bomb – I only had a copy because my university was kind enough to furnish me with a 2-year licence for VS2010, which I used in full; then, just before my account was suspended, I managed to nab a copy of VS2013 which carried me until 2016. I tried making a few cross-platform apps in the beginning, but unless I was using Mono or something far more basic like JavaScript, cross-platform wasn’t really a thing.

Lo and behold, Microsoft went and changed up their style – they’re now shipping free versions of Visual Studio, and not only that but the Community editions are actually quite powerful (this might have always been the case, but since I had free Professional editions I didn’t look too hard). Either way I’m impressed with the level of features available in the Community editions, especially given they’re free. Then a few months later one of my co-workers, Rogue Planetoid, mentioned that Microsoft were releasing the .Net Core standard – a cross-platform SDK for Visual Studio, capable of being run on Unix, Mac, and still natively on Windows.

The framework

This might be old tech as of writing, as the .Net Core 2 standard is released and I never bothered to give 1 or 1.1 a go – but I finally got round to upgrading to VS2017 Community and getting the SDK from the Microsoft site. I won’t go into what it was I was working on, because frankly that’s a bit of a lengthy conversation [My GitHub for the project], but it was effectively a console application. At the moment .Net Core 2 supports ASP .Net websites and console applications, so unfortunately my bizarre love for Windows Forms isn’t yet catered for. But I was keen to get my console app running on my CentOS server.

First of all, you can’t convert an existing application over to a .Net Core app – or if you can, I couldn’t see the option. So I had to create a new project and then port over my code. Thankfully this provided an excellent excuse to refactor. I particularly enjoyed that the code, for lack of a better term, just worked. I didn’t have any third-party NuGet packages or extra content, so the basic Windows libraries could just be bolted on and the code compiled as normal. Within about 20 minutes I had completely ported over my application; an hour after that I’d made it a little prettier.
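In command-line terms the port boiled down to something like this (project and path names are hypothetical):

# Create a fresh .Net Core console project, then pull the old sources across
dotnet new console -o MyConsoleApp
cp ../OldProject/*.cs ./MyConsoleApp/
dotnet build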

Since I was finally moving my code onto the same server as the database, I decided to remove the API calls and use a direct MySQL connector. This meant I did have to get a NuGet package – specifically MySql.Data, which currently supports the standard .Net Framework but isn’t supported on .Net Core unless you get the RC or DMR version. I installed that, did some upgrades and compiled the app.
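If you’re doing the same from the command line, pulling in the pre-release package looks something like this – the exact version string will have moved on, so check NuGet for the current one:

# Add the pre-release MySql.Data package (a 'dmr' build at the time of writing)
dotnet add package MySql.Data -v 8.0.8-dmr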

Setup on the Unix server

So – running it on CentOS. I initially downloaded the 64-bit runtime binaries from the Microsoft blog onto my server, then unzipped them and followed the generic instructions. Microsoft’s instructions tell you to unzip them into your home directory, but I wanted to put them in more of an application directory, so I did the following.

cd ~/ 
mkdir dotnet
cd ./dotnet
wget https://download.microsoft.com/download/5/F/0/5F0362BD-7D0A-4A9D-9BF9-022C6B15B04D/dotnet-runtime-2.0.0-linux-x64.tar.gz
tar zxvf dotnet-runtime-2.0.0-linux-x64.tar.gz
cd ../
sudo mv ./dotnet /etc/

This then meant my .Net Core directory was at /etc/dotnet/… and I now needed to put the runtime on my PATH. Microsoft tells you to execute the export in your command line, but I found that each time you restarted your shell session it would forget what you’d set up, so in the end I added it to my local .bashrc file.

nano ~/.bashrc
#then at the bottom of the file added
export PATH=$PATH:/etc/dotnet

Save, reload your shell, and now you can run any dotnet application with the dotnet command, such as “dotnet -h”.

I did have some trouble on my first application run due to some missing libraries, but they were easy to install through the usual package manager.

sudo yum install libicu libunwind

Package & Run my App

I’m used to a console application build dumping an executable in the output directory along with an app config and some DLLs; .Net Core uses JSON files and DLLs for its binaries, though they shouldn’t really be treated any differently. The main difference to factor in is that your Unix installation doesn’t have a GAC – the Global Assembly Cache. When you run an application on Windows, if the code references a DLL it’ll ask the GAC where the install path is, so the DLL can be referenced and used as normal even if it hasn’t been shipped with the application.

Unix obviously doesn’t have a GAC – so rather than just moving your JSON and DLL files up to the server, you need to actually publish the application and move everything. The difference between the standard “Build” output of a .Net Core application and the “Publish” output is that the publish output includes every runtime dependency alongside your own binaries.

The publish job packages up everything, including runtimes and referenced libraries, so in order for this to run on Unix I needed to publish the application and move that output onto the server. Once it was on the server I could get away with just moving my main DLL up for subsequent releases, but you must publish at least once or you may start to get runtime errors.
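In practice that means running a publish and copying the whole output folder up, something like the following (paths assumed from a default .Net Core 2 project):

# Publish packages the app together with all referenced libraries
dotnet publish -c Release
# Copy the publish output (not just the build output) to the server
scp -r bin/Release/netcoreapp2.0/publish/* user@myserver:/opt/myapp/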

Once it’s all on your server, let it run.

dotnet ./ADFGX\ Server\ Module.dll

or if you want it to run in a background session kick it off with a screen

screen -dmS DotNetApp dotnet ./ADFGX\ Server\ Module.dll
screen -x DotNetApp

Conclusion

All in all I’m very pleased with the .Net Core stuff; it’s downsized the number of IDEs I need to have installed and means I can now start hosting more Windows technologies on my Unix server, which should save me a few pennies as well.

Hopefully in the coming months we’ll see Microsoft bringing out some more application types, and I’m looking forward to more NuGet support. But what I’ve seen so far of .Net Core seems really stable, very easy to set up and really easy to migrate your existing stuff over to.

Steam Link on a Raspberry Pi

A while ago I once again blitzed what was on my Raspberry Pi and started a new project using RetroPie. RetroPie is Debian-based but has no desktop environment to speak of: you switch it on, it runs Emulationstation, and Emulationstation runs other apps. It’s fairly basic, but it also has an optional package for Kodi, so I could replace my bastardised OpenELEC box and my dedicated Raspbian box and find something in the happy middle ground.

While I won’t go into RetroPie, as there is a lot of content out there already, I thought I’d cover something a little newer. “Moonlight (formerly known as Limelight) is an open source implementation of NVIDIA’s GameStream protocol. We implemented the protocol used by the NVIDIA Shield and wrote a set of 3rd party clients.”

Pre-Requisites:

  • Pi 2 (the model I have; a 3 will probably be fine, but a 1 might struggle)
  • A Debian Jessie-based system – as I said, I’m using RetroPie (although Arch is also supported)
  • An Nvidia card running GeForce Experience – mine is a GTX 960, so I’m benchmarking at that
  • A fairly stable network connection – my Pi and PC are both on wired connections, but WiFi may slow things down
  • An SSH connection to your Pi and some basic command-line knowledge

Installation

So first of all, add the repository where we can download Moonlight. Open up your apt sources list and add their deb archive.

sudo nano /etc/apt/sources.list
deb http://archive.itimmer.nl/raspbian/moonlight jessie main


Now you need to add the GPG key. This key verifies that all the packages you’re downloading have come from the same verified source, preventing you from installing rogue packages.

cd ~/
wget http://archive.itimmer.nl/itimmer.gpg
sudo apt-key add itimmer.gpg
sudo apt-get update
sudo apt-get install moonlight-embedded
rm itimmer.gpg

And that is the package installed.

Preparing for streaming
For this next section it’s probably a good idea to have SSH access to your Pi already set up, as you’ll need it to complete the pairing process. First of all, you need to enable Nvidia Shield streaming. This is done through the GeForce Experience settings menu: open up the GeForce screen, click the cog at the top right, select “Shield” from the menu and enable the GAMESTREAM option.

 

After that, SSH into your Pi; running the following command will attempt to pair with your PC

moonlight pair

This will scan the local network for Nvidia GameStream servers; hopefully it will find your machine and attempt to connect. When it does, it will display a 4-digit PIN which you must then enter on your host machine. Once it’s paired, Steam Big Picture will likely launch – in the lowest resolution possible. Just let it launch and then close it, stopping the stream. This should close Moonlight on your Pi too.

So the resolution is terrible. At this point it is worth familiarising yourself with Moonlight via man moonlight, where you will see that appending the flag -1080 or -720 will run the stream in 1080p or 720p respectively, and that the stream command will connect to a paired server. Therefore you can run your game stream using

moonlight stream -1080

This should launch Big Picture, on your Pi, in 1080p – which would be ideal, if not for the controller mapping. Most applications these days support controller mapping, and it’s a damned sight easier than it used to be; the only pain is mapping everything on each device. Thankfully Moonlight supports this – simply use the following command to create a mapping configuration file.

moonlight map ~/Controller1.map
#Now just follow the steps on screen

Worth noting that the .map extension and the name of the file can be changed to whatever you want.

Once mapping is complete, move the file somewhere more memorable than the home folder. I created a new folder in the RetroPie roms folder, as I knew I’d be working out of there later.

mkdir ~/RetroPie/roms/moonlight/
mv Controller1.map ~/RetroPie/roms/moonlight/xpad-config

and now I can launch Big Picture from my Pi, with my Xpad device mapped, using the following command:

moonlight stream -1080 -mapping /home/pi/RetroPie/roms/moonlight/xpad-config

Integrating into RetroPie
RetroPie’s menu structure is quite simple. There are a number of systems; each system on boot will scan a configured directory for files of a certain extension, and those files when selected will execute a custom command in the shell. So in terms of our streaming platform, we just need to write a script to run the stream and have the system execute that. If you’re not a huge fan of custom menus you could just stick it in the Ports system menu – but I’m getting ahead of myself.

As we already know the command to run our stream, we just need to wrap it up in a bash script as below. Create a new file with nano ~/RetroPie/roms/ports/moonlight.sh and enter the following:
#!/bin/bash
moonlight stream -1080 -mapping /home/pi/RetroPie/roms/moonlight/xpad-config

Save the file and make sure it has execute permissions with chmod +x ~/RetroPie/roms/ports/moonlight.sh and, lo and behold, if you navigate into the Ports folder on your RetroPie system you should see “MOONLIGHT”, and executing it should start the game stream. You can stop here if you want. If you want to go further and create your own system menu for Emulationstation, keep reading.

First of all you need to create a custom systems config, which allows you to edit the systems XML without breaking the original. To do this, simply copy the main systems config from the Emulationstation install to your personal settings. You may find you’ve already done some of these steps for Kodi, so be careful not to overwrite your systems config if you’ve already made changes in this area.

sudo cp /etc/emulationstation/es_systems.cfg ~/.emulationstation/es_systems.cfg

Then open it up with nano ~/.emulationstation/es_systems.cfg and scroll to the bottom. You need to add a new system node for Moonlight, which will scan a certain directory for certain file types, the same as any other system in Emulationstation does. So you need to create a new <system> node at the end of the file, but before the closing </systemList> tag.
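The screenshot of my entry is gone, but a representative Moonlight node looks like this (paths matching this article; treat the details as illustrative):

<system>
    <name>moonlight</name>
    <fullname>Moonlight</fullname>
    <path>~/RetroPie/roms/moonlight</path>
    <extension>.sh</extension>
    <command>bash %ROM%</command>
    <platform></platform>
    <theme>moonlight</theme>
</system>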

 

If you’d like more information on what each node means you can check the wiki in the sources of this article. The main focus is the <path> and <extension> elements: when Emulationstation launches it will load the “Moonlight” system and scan the <path> directory for any files ending with .<extension> – in this case it’s going to look in ~/RetroPie/roms/moonlight for any .sh files.

So all that’s left for us to do is create the directory and copy our runtime script across.

cp ~/RetroPie/roms/ports/moonlight.sh ~/RetroPie/roms/moonlight/moonlight.sh

Now when you reboot or reload ES you should see a Moonlight option in the systems menu. When loaded it contains the MOONLIGHT script which, provided the host PC you paired earlier is switched on, will launch Steam Big Picture.

The only extra problem with this is that it kind of looks like crap…

This is because no theme settings exist for the new system, so the default is the text value of the system name with plain white everything. It’s a little bit blinding, so let’s move on to the next section.

Creating a theme for Moonlight
This part took me the longest – mainly because I have frack-all design skill and the folder structure was a bit funky – but, much like the systems config we copied, a theme is a set of vectors and XML documents, so there’s no harm in cloning your current setup and making a ‘custom’ copy.

My current skin is “Eudora”, so make sure when you start work you’re using the right folder to match your skin – I spent a good 20 minutes editing Carbon to no avail. You can check which skin you use in the UI settings of your RetroPie menu.

So first of all copy your current skin and its contents to a working folder, renaming it so that Emulationstation can differentiate between the skins.

sudo cp -r /etc/emulationstation/themes/eudora /opt/retropie/configs/all/emulationstation/themes/eudora-custom

Then navigate inside and create a new folder for our theme. It has to match the value of the <theme> element we specified in the system config. I found it a lot easier to just clone an existing theme and replace the artwork, as everything is based off vectors anyway…

cd /opt/retropie/configs/all/emulationstation/themes/eudora-custom
cp -r ./kodi ./moonlight
cd ./moonlight

Now, inside the moonlight theme folder I had 3 files (I know Carbon has an additional art folder, so bear in mind that the folder structure of a skin differs): one was the theme XML doc; one was the graphic for the “controller”, i.e. the transparent object which appears above the central bar in the menu; and one was the logo, which appears on the central bar in the Emulationstation menu.

So all I had to do was replace controller.svg and logo.svg. Annoyingly but understandably, these are vector images – you can’t just whack them into Photoshop and expect it to work. I recommend Inkscape if you’re after something free and quick, but if you want to do some proper work on the subject you’ll probably need to head down the Adobe Illustrator route. I was just after a quick job, so Inkscape did me fine. I found a Steam SVG icon online and used that for the “controller”, then I opened the same image in Inkscape and added the text “Moonlight”. I did try playing around with some effects but, like I said, I’m not a designer. It’s a little lacking, but it works – it got rid of the brilliant white and I know what it is now.

I uploaded both images via SFTP into the target folder, then through the Emulationstation menu changed my skin from Eudora to Eudora-Custom, rebooted, and that was it: a custom menu entry for a Steam link running on my Raspberry Pi.

Sources:

Moonlight Website
Moonlight WiKi
Add a new system in Emulationstation

Setting up Raspbian Virtual Desktop with VNC, SSH and Samba

A few months ago, while having some issues updating OpenELEC on my RPi, I decided to blitz the SD card and go full OpenELEC with a fresh install of version 6. However, this had some problems – namely that it no longer functioned as a file server. Or rather it could, but the lack of options to configure ports, passwords and other settings generally makes file hosting through OpenELEC a huge security risk, and even if you hack your way around the read-only filesystem, it just gets overwritten on the next update.

I decided to buy another Raspberry Pi. With the Model 3 out now, Model 2s became a lot cheaper – I picked one up here for £25. I decided early on it was just going to be a file server, so after setup I wasn’t going to need a keyboard, mouse or HDMI plugged in – just power, hard drive and network – with the aim of controlling everything via VNC and SSH.

The Pi arrived, I installed Raspbian on the SD card and plugged it in. I’d gone for a direct install of Raspbian, so I didn’t get the setup wizard I’d seen before when using NOOBS; however, you can trigger this yourself from the terminal. So open up the terminal and type

sudo raspi-config

Then you can go through the setup process. After I had set that stuff up the way I wanted I had the following to-do list:

  • Configure SSH to be more secure.
  • Mount my external HDD locally.
  • Install samba and configure it to share the hard drive over the network
  • Install VNC and set it to launch on startup as a virtual desktop

Configure SSH to be more secure

SSH can be brute-forced; that shouldn’t be a surprise. Raspbian by default will give you a default username and password, so if you haven’t changed it already, change your password – you can do this by opening up the terminal and entering the command passwd
I also find that you can avoid most bot attacks if you simply change the default port that SSH runs on, so I recommend opening the config file located at /etc/ssh/sshd_config and changing the value for Port. The default is 22 – change it to something memorable, then either restart the Pi or use sudo service sshd restart
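For reference, the relevant line in /etc/ssh/sshd_config just names the port – 1234 below standing in for whatever memorable number you pick:

# /etc/ssh/sshd_config
Port 1234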

You should now be able to access your Raspberry Pi from any SSH client with ssh pi@192.168.0.X -p 1234, where 192.168.0.X is your Pi’s IP address and 1234 is the secret port you set up.

Mount my external HDD locally

Linux can pretty much mount anything (phrasing). It prefers things formatted as EXT3/4, but can also read NTFS, FAT and HFS file systems. When you plug in a hard drive it will by default mount itself under /media/. I wanted to mount my hard drive in another location just for ease. To do this you need to create a folder as a mount point and then configure the fstab to mount the device in that location.

So first of all, figure out what your device is called. Plug the device in, open up the terminal and use the command sudo blkid – if your device has a name (mine is called “JOES-SLAB”) then you should be able to pick it out of the list of results.

~$ sudo blkid
/dev/sda1: LABEL="JOES-SLAB" UUID="82AA5C66AA5C58AD" TYPE="ntfs" PARTUUID="7dd7743f-01"

So I can see that my external NTFS HDD is at /dev/sda1. Copy the UUID, as you will need it later. At this point you should unmount the drive – leave it plugged in, but use the command

sudo umount /dev/sda1

where /dev/sda1 is the device node for your drive. If it successfully unmounts then you need to open up and edit /etc/fstab. This file is root-protected, so you’ll need to open it with elevated permissions.
The fstab is a table which gives the system information on how to mount certain devices. It may already have entries in it; this is fine, just go to the bottom of the file and start a new row. The fstab takes 6 fields:

  • Device
  • Mount Point
  • Filesystem
  • Options
  • Dump
  • Pass

So, using the UUID you recorded earlier you can add your device to the fstab. An example of mine is below, followed by an explanation of what each field means.

UUID=82AA5C66AA5C58AD   /home/pi/Documents      ntfs-3g         uid=1000,gid=1000,umask=022     0       0

Device: UUID=82AA5C66AA5C58AD – each storage device has a UUID, which lets the operating system know which device to mount
Mount Point: /home/pi/Documents – when I open this folder, the contents of the hard drive will be displayed. This path needs to exist for this to work, so create the directory if it’s not already there.
Filesystem: ntfs-3g – my hard drive uses NTFS; ntfs-3g is an external driver package which handles the read/write support. In all fairness I’m not all that clued up – this is just what works. If your hard drive is FAT or HFS(+), have a Google as to what you need to put here.
Options: uid=1000,gid=1000,umask=022 – NTFS drives don’t let you change read/write/execute permissions after they’re mounted, so you need to set them up here. This setup gives user pi ownership and sets the drive’s permissions to 755, so only pi can write to the HDD but others can read and execute.
Dump & Pass: 0 0 – these control backup dumps and boot-time fsck ordering respectively; 0 and 0 are fine here.

Save the fstab, then in the terminal run sudo mount -a – this will remount everything in the fstab. If everything went to plan, you should find the contents of your HDD listed under /home/pi/Documents (or whatever mount point you specified).

Install samba and configure it to share the hard drive over the network

Samba is a package for Unix/Linux that allows file and printer sharing across a local area network. I used it mainly so I could map the hard drive on my Raspberry Pi from other devices, like my Windows laptop and PC.

Samba is probably already installed, but run sudo apt-get install samba just in case. Once installed, you can find the main configuration file at /etc/samba/smb.conf – open that up with elevated permissions. The file will already contain a lot of configuration, some of it contained within a parent section denoted by [] brackets; for instance, at the top level is [global].

Under [homes] I added my own custom section. As before, I’ll show the configuration and then explain each option.

[Public Documents]
   comment = Public Documents
   path = /home/pi/Documents
   guest ok = no
   browseable = yes
   read only = no
   create mask = 0755
   directory mask = 0755

[Public Documents] – this lets Samba know that it’s reading a new configuration section; it is also how the folder will appear when viewed from another device. For example, if I browsed to my Pi from my PC I would see \\raspberrypi\Public Documents
comment – a description of the share; I usually give it the same value as the section name
path – the local path to the location you wish to share across the network (it matches our HDD mount point)
guest ok – no, I want people to log in before they access my hard drive! If you’re not bothered you can set this to ‘yes’
browseable – yes, I want to be able to browse the drive
read only – no, I want to be able to write to the drive
create/directory mask – the permissions applied to files and folders created over the network

Save and quit. Since I specified that guests were not OK, the final thing to do is set up your user as a valid login. To do this, in the console use the command sudo smbpasswd -a pi. This lets you set a Samba password for your login; it can be the same password you set when configuring SSH, which means that when accessing the drive remotely you can log in with your usual username and password. You can specify a user other than “pi”, but I’m not sure how that would interact with the fstab permissions on the drive. Also remember that if you change your password in future, you’ll need to update both.

Restart Samba with the command sudo /etc/init.d/samba restart, after which you should be able to access your hard drive from another device across the network. On Windows I can map the drive using the Map Network Drive wizard, connecting with different credentials and entering the Pi’s username and password. Sometimes you may need to specify the domain of the user account, as Windows will occasionally try to log you in as a local user rather than a remote one (i.e. “WindowsPc\pi” rather than “raspberrypi\pi”), so be sure to specify your Pi’s hostname as the domain – if you don’t know what this is, run the command hostname on your Pi and it will tell you.


If all has gone according to plan, your remote drive should show up as Z:\ on your Windows device (or phone, or tablet).


Can I access my files from outside my local network?

Yes, you can – though not through Samba. If you’ve configured SSH properly you should be able to port-forward the secret port you chose to your device. For this you’ll need to give your device a static IP on your network and set up the port-forward rule; these are usually done through your router/gateway.

Once you have port forwarding set up you can use SFTP to access your files. SFTP is like FTP, but uses an SSH connection to secure and tunnel the traffic, so you don’t need to install anything new. With any file transfer program that can handle SFTP you can connect using sftp -P 1234 pi@X.X.X.X, again where X.X.X.X is your public external IP address and 1234 is the secret SSH port you set up.

Linux has SFTP support built in; Windows and Android devices will need a third-party SFTP client.

 

Install VNC and set it to launch on startup as a virtual desktop

The final step is to install and set up VNC. VNC is a server application which allows you to connect to a device, view its desktop and optionally use the keyboard and mouse for remote desktop control. It’s old tech, but I prefer that as it can be used with almost any device, and you can still secure it (although the method we’re going to use only supports 8-character passwords, for some god-awful reason).

In my experience Raspbian has two main VNC packages: TightVNC and Vino. Vino works by letting the VNC server assume direct control of the XSession in progress – if you had a monitor connected you would see the mouse cursor moving, opening files and typing. TightVNC creates virtual XSessions which look identical to your normal desktop, but you don’t need a monitor, and even if you did have one plugged in, the session you control through VNC would not be the same one.

I opted for TightVNC in this case, as I did not need my RPi to be plugged into a monitor, keyboard or mouse after setup. Install TightVNC by opening the terminal and typing sudo apt-get install tightvncserver. On first run you will be asked to set up a password. As I mentioned earlier, only the first 8 characters count – you can type in more, so if you can only remember a longer password it will still work, the program will just truncate it. Run TightVNC with the command vncserver. You will be asked to enter a password and verify it, then you’ll be offered the chance to set a view-only password – this you can give out to people so they can view your desktop but have no control.

Once the server is running you will likely see a message like New 'X' desktop is running on raspberrypi:1. This means your VNC server is now running on virtual desktop 1. The virtual desktop numbers correspond to actual TCP ports: by default VNC servers start at port 5900, so when TightVNC says it’s running on raspberrypi:1 what it means is “locally I’m running on desktop 1, externally I’m listening on port 5901”. TightVNC cannot run on :0, as this is the real desktop environment, so when you run vncserver without any extra parameters it will default to port 5901. If you have a VNC client on another machine on your local area network you can connect to your VNC server with X.X.X.X:5901 and the password you created during setup. If you want to run on a different virtual desktop number, just remember that other devices on your network will need to access it via TCP port 5900 plus the number you set – I’ll explain more on that as we go.

End the VNC server session with the command vncserver -kill :1. This will terminate any VNC server running on virtual desktop 1. Since we want our VNC server to start on system load, and we want to customise the setup, the easiest thing to do is figure out what options we want to run the server with, and then run that command automatically on startup. You have a few options, but I only want to set the resolution and colour depth. So the command is the following – I’ll explain what it does after.

vncserver :101 -geometry 1366x768 -depth 24

:101 – this is the virtual desktop number. It directly affects the TCP port your VNC server runs on, so my server runs on port 6001 (5900 + 101); again, changing the port is a good way to avoid most bot attacks.
-depth 24 – sets the colour depth; if you have a particularly slow connection you can lower this (to 16, say) to reduce the bandwidth needed.
-geometry 1366x768 – sets the resolution of your virtual desktop. It can be as big or small as you like; I set it to this because it fits the smallest device in my house without much scaling, but you can set anything you see fit.

Now you have a command which will run the VNC server on the port and at the resolution you want; you just need to make it run automatically. I found that sometimes the server booted up before the desktop environment. You’ve got free rein here to choose your own method, but I found the one below works best – originally from this post.

Open up a new text file and enter the following text

[Unit]
Description=TightVNC remote desktop server
After=sshd.service

[Service]
ExecStart=/usr/bin/tightvncserver :101 -geometry 1366x768 -depth 24
User=pi
Type=forking

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/tightvncserver.service. The only thing you may need to alter is the ExecStart line, where you can see our command to start the server. You may notice that vncserver has been replaced by /usr/bin/tightvncserver – don’t worry about this too much, as it’s essentially the same command; you’re just providing an absolute path to the binary. You can also alter the user running the service if you wish.

Once the file is in place you need to make it owned by the root user and then enable it as a system service. Open a terminal and run the following commands.

#Configure the script to run at startup
sudo chown root:root /etc/systemd/system/tightvncserver.service
sudo chmod 755 /etc/systemd/system/tightvncserver.service
sudo systemctl enable tightvncserver.service

If you want to test that your service is working, run the following command – you should see either the VNC startup message, or an error if it’s already running.

#To test the script is working
sudo systemctl start tightvncserver.service

To test it’s working, restart your device and see if you can connect to the VNC Server.

Once it’s working the way you want, you can disconnect your keyboard, mouse and monitor, as your Raspberry Pi is now serving a virtual desktop all on its own.

To connect to your VNC server you’ll need a VNC client on each device (sorry, I don’t have any Mac devices to recommend for).

Can I access my VNC server outside of my network?
Yes, you can – but you shouldn’t. Having your VNC port exposed makes you vulnerable to a few attacks, and once someone breaks your VNC server they can control your device without needing your password, so it’s not wise to simply port-forward your VNC port. I recommend using SSH tunnelling instead, which many VNC clients support; I discussed this in more depth in one of my previous articles here. It depends on how secure you want things to be, but yes – there’s no reason why you can’t remote-control your device over the internet.
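A minimal example of that tunnelling approach, assuming the SSH port 1234 and the VNC display :101 used earlier:

# Forward local port 5901 over SSH to the Pi's VNC port (5900 + 101 = 6001)
ssh -L 5901:localhost:6001 -p 1234 pi@X.X.X.X
# then point your VNC client at localhost:5901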

A strange bug with VNC having no taskbar
When I was connecting to my desktop session, I found that all I saw was my desktop wallpaper. This was because the LXDE desktop environment was not actually being loaded in the virtual desktop. Your home directory contains a hidden folder, /home/pi/.config/lxpanel/, which can hold a copy of the LXPanel config files; you just need to copy them over from the system directory.

cp -r /etc/xdg/lxpanel/profile/LXDE /home/pi/.config/lxpanel
cp -r /etc/xdg/lxpanel/profile/LXDE-pi /home/pi/.config/lxpanel    #This command might fail if you're on an older RPi model

This should fix the LXPanel issues on your virtual desktop, and you should now be able to use VNC to control it.
