Quality code and lots of coffee

Author: joe.vandebilt

SQL

Databases play a vital role in any backend system. From the early stages of my A levels, throughout my degree and now in my current employment I have been using SQL-based systems, and through both professional and personal interests my SQL knowledge has grown from simple web interfaces to managing entire databases on various platforms, including Microsoft SQL Server, MySQL and Oracle.

I have worked with SQL Server in a professional capacity since 2013, when I started my development career. Working for a company with a tech stack ranging from 2005 to the present, I became familiar with a range of SQL Server products, from SQL Enterprise 2000 and SQL Profiler through SQL Server 2005–2016. I work with tables, views, stored procedures and functions, as well as database-level objects such as logins, schemas and SQL Agent jobs. Handling large quantities of data at a time, I am proficient in indexing, query optimisation and common table expressions, and in tracking SQL through source control both as DDL scripts and with Microsoft’s SSDT.

I have used MySQL in both my personal and freelance work; as a free, open source option it is a powerful and cost-effective database platform. Working from both the command line and GUI tools, I have set up multiple databases, tables, functions, stored procedures and users, including the database this blog runs on.

My degree work focused on Oracle 11g, creating tables and stored procedures as well as covering relational algebra.

PHP

PHP is a free server-side scripting language. I started teaching myself PHP in 2010, when I started university, as a means to take on paid work during my studies. I quickly learnt the basics and after a short while was able to build my own code libraries for sites, allowing them to interface with MySQL databases, send mail and manage login systems. PHP has been a vital part of my website development, and in the final year of my BSc I used a PHP-based system to deliver a project to a client, despite the language not being taught on the course.

Hosting Webmail for Multiple Domains

The hosting business I run on the side of my day job has the odd client wanting their own mail system. I set up a Postfix/Dovecot system on our VM not too long ago and it has slowly built up into an actual workable system. We have had some issues with a PHP exploit and some spam, but that’s an ongoing pain in the arse and another story altogether.

One of the obvious problems with hosting your own mail server is: how do people actually see their mail? We offer IMAP and SMTP support so they can plug in their own clients, but what if they want an actual interface? There are a number of pre-made solutions, which are essentially websites that hook into your internal (or remote) mail system. We decided to go with Roundcube: it comes with its own online installer and needs MySQL and some extra PHP functions, but it’s well documented and the install process is fairly easy. We set up Roundcube for three domains, including my own.

The problem came when I tried to set up Roundcube for our fourth client. I realised that fundamentally there was no difference in the backend: despite using their own URLs they all resolved to the same location, and the site itself was identical save for one or two config values. They were also all using individual databases (belonging to their sites), and it dawned on me that I had the same table setup across four databases for no particular reason…

I decided to downsize and move everybody onto the same client. I started with a fresh Roundcube install, made backups of the data (not the schema/table structure) from every client’s individual database and compiled them into one script. The user IDs needed to be manually amended (each domain’s users naturally started from 1), but thankfully we had about 11 users in total, so it wasn’t difficult to go down the list and change the IDs.

Since we use virtual hosts for our sites, and I wanted to enable the mail portal for every site, the easiest thing to do was create an alias file. An Apache Alias directive maps a particular URL path to a location on the filesystem. For example, I wanted every site with a /Mail URL to serve the new Roundcube installation, so I created the following file:

# Create the alias config file
nano /etc/httpd/conf.d/Mail.conf

# Contents of Mail.conf:
Alias /newmail /var/www/Mail
Alias /Mail /var/www/Mail
Alias /mail /var/www/Mail
#EOF

# Restart Apache to pick up the new aliases
service httpd restart

This enabled any site we host to serve the Roundcube installation whenever one of the specified aliases was used, for example http://mycustomdomain.com/Mail.

Then it was just a matter of making the configs unique. The easiest way I’ve found is to use PHP’s $_SERVER superglobal to detect basic information, like which URL has been used to access the site, then use that information to assign values in the config. You can set Roundcube to pass the logged-in user’s username and password through for SMTP auth when they try to send, and by default users log in via IMAP, so they are properly authenticated. Since the config file is just PHP, you can define a number of per-domain parameters in an array and load them in while the config is being read, as in the code below taken from my config.inc.php file in the Roundcube config directory.

//Now some fancy scripting to set the logo based on the domain viewing the page
$CustomerArray = array();
$CustomerArray[] = array("domain" => "vandebilt.co", "logo" => "http://vandebilt.co/Archive/images/joelogowhite.png");
$CustomerArray[] = array("domain" => "anotherdomain.co.uk", "logo" => "http://anotherdomain.co.uk/logo-250.png");

for ($i = 0; $i < count($CustomerArray); $i++)
{
    //If the URL being used to view the site contains the domain name
    if (strpos($_SERVER["SERVER_NAME"], $CustomerArray[$i]["domain"]) !== false)
    {
        $config['skin_logo'] = $CustomerArray[$i]["logo"];
        $config['product_name'] = $CustomerArray[$i]["domain"] . ' Webmail';
    }
}

You can expand the array to hold as many config items as you like per domain, then load them in as site parameters during the loop, as in the sketch below. So if I access vandebilt.co/Mail I see my own logo, and if I access anotherdomain.co.uk/Mail I see theirs. This gives users the impression that they have their own webmail portal, when it is in fact the same site with some very basic config tweaks to make it seem like their own. It lets us downsize on server space, share plugin configurations, debug only one site and ultimately support it better.
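For instance, a third customer with an extra per-domain setting is just another entry in the array and one more assignment in the loop. The domain, logo and “support” values below are made-up examples, and it assumes your Roundcube version supports the support_url option:

//Example only: a third entry appended to the $CustomerArray defined above
$CustomerArray[] = array("domain" => "thirdcustomer.example", "logo" => "http://thirdcustomer.example/logo.png", "support" => "http://thirdcustomer.example/contact");

//Same loop as above, with one extra assignment per domain
for ($i = 0; $i < count($CustomerArray); $i++)
{
    if (strpos($_SERVER["SERVER_NAME"], $CustomerArray[$i]["domain"]) !== false)
    {
        $config['skin_logo'] = $CustomerArray[$i]["logo"];
        $config['product_name'] = $CustomerArray[$i]["domain"] . ' Webmail';
        $config['support_url'] = $CustomerArray[$i]["support"];
    }
}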

Roundcube Webmail Client

Unix Command Line Cloud Storage

When I originally set up my Minecraft server some four years ago, I wrote a script to automatically back up the world, plugins and database entries to a Dropbox folder. The script would run in the middle of the night and email me the output; such is the beauty of cron. The Dropbox daemon running in the background would pick up the new files and sync them online. A simple solution.

As time went on the script became more complex to handle certain issues I had: making sure the previous backups were deleted before the new files went in, and once they were deleted, waiting for Dropbox to finish syncing before shoving the new ones in their place. That avoided most of the data conflicts I had experienced.
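That flow boils down to something like the sketch below. The paths, archive names and database here are examples rather than the real script, and it assumes the dropbox.py command line script is on the PATH as dropbox (an idle client reports “Up to date”, or “Idle” on older versions):

#!/bin/bash
# Sketch of the nightly backup flow described above (example paths and names only)
BACKUP_DIR=/root/Dropbox/mc-backups

# 1. Delete the previous night's backups
rm -f "$BACKUP_DIR"/*.tar.gz "$BACKUP_DIR"/*.sql

# 2. Wait for Dropbox to finish syncing the deletions before adding anything new
until dropbox status | grep -qE "Up to date|Idle"; do
    sleep 30
done

# 3. Shove the fresh backups into the synced folder
tar czf "$BACKUP_DIR/world-$(date +%F).tar.gz" /opt/minecraft/world
tar czf "$BACKUP_DIR/plugins-$(date +%F).tar.gz" /opt/minecraft/plugins
mysqldump minecraft > "$BACKUP_DIR/minecraft.sql"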

Eventually we moved away from Minecraft (although it is still running) and started hosting websites for ourselves, for small projects we work on and even for some other people. It became sensible to extend the script to back up websites, mail directories and server configurations in case of a system collapse. Dropbox, despite its many features, didn’t provide enough space; I’d managed to accrue 3.5GB of free space through their various bonuses, but it was no longer enough. On top of this, our Minecraft server runs CentOS 5, which, although still supported by Red Hat until 2017, is old. After a recent format of the MC server I tried to reinstall Dropbox, only to find that the client could no longer run, and even if I downgraded there was no way to connect the server to my account due to the version difference. After asking on the Dropbox community forums whether there were any plans to go back and support RHEL5, the answer was a begrudging no.

Alternatives are available. Thanks to a bonus I received with my phone, my Google Drive has over 100GB of space, but there is no command line client (nothing official or native at least). I had a look around at some of the other cloud solutions and found Copy.

While not seeming very elaborate or exciting (as exciting as cloud storage gets), it was supported on Android, iOS, Windows and Linux, and provided 15GB for a basic account. This would easily cover my needs.

Unfortunately, Copy also didn’t support RHEL5, so as it happens my MC server is still without a proper sync daemon. I’ve worked around it by using an SCP script to simply shove everything onto my newer, fancier RHEL6 box.
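The workaround is nothing clever: a nightly cron entry along these lines copies the archives over. The hostname, key and paths are examples, and it assumes key-based SSH auth is already set up between the two boxes:

# Example only: push the night's archives from the RHEL5 box into the RHEL6 box's Copy folder
scp -i /root/.ssh/backup_key /root/backups/*.tar.gz root@rhel6-box:/root/Copy/minecraft/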

The Copy daemon can be downloaded from their site as a .tar.gz; uncompress it and stick it wherever you normally put programs. For me that was /etc/copy/:

wget https://copy.com/install/linux/Copy.tgz --no-check-certificate
tar zxvf ./Copy.tgz
mv ./copy /etc/
cd /etc/copy/x86_64

If you’re running purely from the command line, the only thing you need is CopyConsole, which can be found in either the x86 or x86_64 folder. To set it up initially you need to provide your username, password and the directory you wish to sync.

mkdir /root/Copy
./CopyConsole -u=myemail@domain.com -p="my password with spaces" -r=/root/Copy

This should then connect to your account and try to sync. Try adding some files through the web interface and see if you notice them downloading. Obviously, running the command in the foreground leaves you stuck watching the console, so run it in a screen. Once you’ve run CopyConsole with the required arguments it will have written a config in your home directory, so you don’t need to pass them again or leave your credentials visible in the process list.

screen -dmS CopyDaemon /etc/copy/x86_64/CopyConsole -r=/root/Copy
screen -x CopyDaemon
# Press Ctrl+A then D to detach from the screen

That will let your app run happily in the background, and anything you put into /root/Copy will be synced. One other thing to do would be to check that the daemon is running when you do your backup job – I’m not sure how reliable this service is yet.

echo "Checking Copy Daemon status..."
SERVICE='CopyConsole'
if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
echo "$SERVICE service running"
echo ""
else
echo "$SERVICE is not running, Starting now"
echo ""
screen -dmS CopyDaemon /etc/copy/x86_64/CopyConsole -r=/root/Copy/
sleep 10
fi

The only downsides to Copy compared with Dropbox are that I find the sync speeds much slower, and there is no status interface, so I can’t figure out how to automate checking whether Copy has finished syncing. However, it seems to be a bit lighter on the processor (much more so than Google Drive), so all in all it seems a worthwhile investment until Dropbox offers more support or Google Drive goes native.

Sources:

  1. Dropbox
  2. Copy
  3. Checking to see if a service is running in a shell script

Integrating SoapUI tests with TeamCity

I recently set up a CI server for a new project at work, as the senior staff were quite keen on implementing the tech I had recently pioneered on another project. After configuring the build, including automated NUnit tests and database, web and service deployments, I was asked if I could integrate automated testing using SoapUI, something the test team had been pioneering for this project too.

SoapUI is a very lightweight, very powerful tool for testing SOAP requests against a web service, such as an API endpoint. You can build a fairly expansive library of requests, configure the expected results and set pass/fail criteria based on the returned messages. As a tool to test our new API, it was an obvious choice.

The SoapUI project had been created in its own SVN repository, so in TeamCity it was quite simple to create a standalone SoapUI build configuration rather than stick it on the end of another build (although you can easily set the SoapUI tests to be triggered on another build’s success).

I won’t get into the whole TeamCity configuration because it is vast; I’ll just stick to the SoapUI stuff. Before we begin, some housekeeping so you know what I’m talking about:

  • OS: Windows Server 2012 R2
  • User Permissions: Administrator (Read, Write & Execute)
  • TeamCity: 9.1.4 (build 37293)
  • SoapUI: 5.2.1 (OpenSource)

I decided to install the full SoapUI package on the CI server. If you’re strapped for space I imagine there is a way to use the test runner as a standalone app, but the full package is only around 250MB, so it wasn’t a huge impact.

In the installation directory “<install path>\bin\” there is a file named testrunner.bat (or testrunner.sh if you’re a Unix user). This batch file can be fed a multitude of arguments to run your SoapUI project and generate results files. This is important, since TeamCity cannot explicitly get the results while the batch is running (other than success/failure exit codes).

The test runner can be run directly through a “Command Line” build step in TeamCity; however, I needed extra steps to organise and clear out space on the drive, so I wrote my own batch file to do a little housekeeping and then run the test runner. Some people use MSBuild for this task, though I felt there wasn’t quite enough information for me to approach it that way, so I’ve gone old school and written a Windows batch file as follows.

@echo off
E:
IF EXIST E:\SmartBear\SoapReports\Results (
    REM Delete the old output directory (:: can misbehave inside a bracketed block)
    echo "Reporting output directory exists, deleting it"
    RMDIR E:\SmartBear\SoapReports\Results /S /Q
)
mkdir E:\SmartBear\SoapReports\Results
E:\SmartBear\SoapUI-5.2.1\bin\testrunner.bat -rajI "E:\TeamCity\buildAgent\work\SWIM_TEST\SWIM-ware-soapui-project.xml" -f "E:\SmartBear\SoapReports\Results"

The idea is that the batch clears out any old files, creates a fresh output directory for the results, then runs the test runner. As I said before, the test runner takes a multitude of arguments for a variety of things; in this case I’ve used five:

  • -r: Turns on printing of a small summary report
  • -a: Turns on exporting of all test results, not only errors
  • -j: Turns on exporting of JUnit-compatible reports, see below
  • -I: Ignore errors: execution does not stop if an error occurs, but no detailed information about errors is written to the log
  • -f: Specifies the root folder to which test results should be exported

The -I argument was important: when the test runner does its thing, if any of the tests fail TeamCity will pick up on the errors and mark the entire job as failed. That is obviously not the point of testing; we want detailed errors on the failures, not just a failed job. The -I flag suppresses some printing to the console, but that information can be pulled out later from the test results.

After the -rajI command is the path to the SoapUI project, and after the -f argument is the path to where I wanted my results to go.

When I first ran the job I encountered an issue where the build would not proceed beyond the “All Plugins Loaded” step. I searched the internet and didn’t find much beyond others experiencing the same. I eventually discovered that the test runner was not capable of generating a soapui-settings.xml file, and as such was asking me every time it ran whether I wanted to send my usage data to SmartBear; while generally unrelated, it was waiting for me to select an option and blocking the rest of the job. The solution was to generate a settings file, which I did by launching the UI once. If you can’t run the UI, I uploaded my sample config in a forum post here.

I also had a few issues in my error logs relating to the max_connections_per_host value, as it was set to an empty string and SoapUI was trying to parse it as an integer. I had to manually update this setting to a numeric value. This can be done either through the front end or by manually editing your settings XML (key HttpSettings@max_connections_per_host). To edit it through the front end, go to File > Preferences; the option is under the HTTP Settings.

So I now had my TeamCity build config downloading the SoapUI project from SVN and running the test cases, and I could see the output. Unfortunately, all it was accomplishing was telling me it had run successfully; there were no actual test results. I was following a tutorial, but it got a little vague at this point about how to proceed.

When the test runner completes, it generates a number of .txt files; these summarise the request sent and the response from the server, as well as header information and so on. The test runner also generates a summary .xml file, but this is just a top-level summary of the execution. So in order to get some proper reporting you need to do two things: first, read the XML results file(s) into TeamCity; second, publish the failed tests as artefacts to provide greater detail on the failures.

For the first part, in your build configuration go to “Build Features” and add a new XML report processing feature with report type “Ant JUnit”, since this is the type that SoapUI exports (thanks to the -j argument). You then need to add a monitoring rule to point TeamCity to the path where the report summary is kept. For me this was the following:

+:E:\SmartBear\SoapReports\Results\*.xml

Adding this build feature allows the post-test report to be read into TeamCity, showing you the number of failed and passed tests. The only issue is that when viewing the stack trace the information isn’t great; at best you will see the failed assertion from the test case. To improve on this we can read in the individual test outcomes from the reports folder. This is mentioned here, but it is rather vaguely worded, so it took me a while to figure out.

Under General Settings in your build configuration you can add files to be captured as artefacts; these are saved along with the build run, so the physical files can be deleted, and old artefacts can be cleaned up depending on your TeamCity configuration. To capture the artefacts, add the results path with a wildcard for *FAILED.txt; every failed test in SoapUI produces an outcome file ending with that string, so you can capture the details of the failed tests relatively easily.
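As an illustration, using the same results folder as earlier, the artefact path rule ends up looking something like the line below (an absolute path works here because the results live outside the checkout directory):

E:\SmartBear\SoapReports\Results\*FAILED.txt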


Now when you run your SoapUI build, the tests run, the top-level failures and passes are displayed, and you can check your artefacts for detailed errors, including what message was sent and what came back. Very handy.

Sources:

  1. SoapUI – Running Functional Tests
  2. Running SoapUI Tests in Teamcity
  3. Integration with TeamCity – TeamCity waiting on a condition.
  4. Setting your preferences in SoapUI

 

Vervepower

I was approached as a technical consultant to oversee the migration of web and email services from this website’s previous hosts to our own servers. The migration involved importing the user mailboxes, email clients, web platform and databases. It required me to set up IMAP and SMTP access for the users to reach their mail, as well as ensuring the domain name was transferred correctly and that digital signing and security were kept intact.

Metro Excuses

Metro Excuses was a JavaScript webpage I designed to teach myself some of the fundamentals of the language.

The project takes a satirical look at some of the excuses that Tyne and Wear Metro have given commuters in the past and allows users to create some of their own.

The basis of the system is that an excuse has three components: a degree of failure, an affected part of the system and an attribute of that part that has failed. Hard-coding generic values into arrays meant that users could build their own combinations, as well as use a randomise function to generate random ones.

Legend of Drongo

The Legend of Drongo was a project I started as a means to familiarise myself with C# programming concepts for my studies.

Following my studies the project was mothballed and is now being worked on in sprints as and when I have the time. As a project it is a product of the accumulated skills I have gained over the past 4 years of development experience, so naturally some areas of the program are poorly designed and executed.

However, I am constantly making changes to the engine, and one day hope to release it as an indie game.

The bulk of the project is a game engine in which users can traverse a ‘world’ composed of interlocking data types. These data types can be configured in an accompanying world editor, built with Windows Forms, which allows users to create and edit custom worlds to be played through the game engine.

Diamond Dust Server

The Diamond Dust Server was a small gaming community of which I was a part. As the technical lead I was tasked with maintaining a virtual server which hosted the popular sandbox game ‘Minecraft’.

Following the success of the server and its ever-increasing traffic, I helped design and create a website for the community, built in PHP and hooked into a MySQL database back-end.

The website allowed users of the server to create accounts and link them to their in-game profiles, which also relied on the MySQL database. Through the shared data I was able to create a platform in which users could log in online to view stats and progress and chat with other players without actually playing the game, the aim being to expand the community and create a space for advertising revenue.


© 2025 Joe van de Bilt
