I maintain an open source app called Simple Markdown. One of the things I’ve been working on for it lately is getting a CI server set up to automate the testing and packaging of the app. I’ll still handle the publishing myself, but I’d like to speed up the rest of the process. Anyways, given that the code is all freely available, working with a CI server is tough because I can’t store any secrets in the code. It’s generally bad practice to do so anyways, but when you have a private repo and you’re the only developer, it’s usually fine.

I’ve got my gradle scripts configured to automatically sign the release variants with my signing key, and that was pretty straightforward to mock out in the absence of the actual signing configuration, but I have been unable to find any way to mock out my google-services.json file, which contains the API keys and URLs for the Firebase project I’m currently using for error reports and analytics. As such, I began looking into the security concerns of just including it in the repo. I ran into this StackOverflow post, which points to this other StackOverflow post and a Google Groups discussion, all of which seem to come to the same conclusion: the google-services.json file is not really secret, as it could easily be retrieved from your compiled APK. Seeing as it’s written on the internet, it must be true.

Just kidding, let’s investigate.

I’ll start out by downloading the Play Store version of my app onto a device. I use ProGuard to obfuscate it, so if I can still find the google-services.json values in there, then they’re not protected in any way and it’s probably just fine to publish the file in my repo. Let’s grab that APK:

# Get the path to the APK:
.\adb.exe shell pm list packages -f com.wbrawner.simplemarkdown
# Pull the APK file
.\adb.exe pull /data/app/com.wbrawner.simplemarkdown-dAHkKgk5P96rLBONU3zMNw==/base.apk simplemarkdown.apk
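# Alternatively, pm path should give you the APK path directly:
.\adb.exe shell pm path com.wbrawner.simplemarkdown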

Android Studio has a helpful way to unpack the APK and examine its contents. Just press Ctrl + Shift + A (that’s Cmd + Shift + A for those of the Apple orientation) and type “Analyze APK”.

Navigate to wherever you pulled the APK file to and open it. Clearly, there’s no google-services.json file in here, so case closed, right? It’s not safe to publish it. Well, hang on a second. Let’s check the AndroidManifest.xml file… Nope, not there either. How about this resources.arsc thing? Maybe it’s in the string resources. As we scroll down a bit, a couple of values jump out at me. firebase_database_url: https://simplemarkdown.firebaseio.com. That matches exactly the firebase_url parameter in the google-services.json file. gcm_defaultSenderId: 318641233555. There’s our project_number. google_api_key: AIzaSyBDMcXg-10NsXLDKJRtj5WnXoHrwg3m9Os. That’s a one-to-one match for the current_key value inside api_key.
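Incidentally, you don’t even need Android Studio for this part. A quick sketch of the same check from the command line, assuming you’ve got aapt from the Android SDK build-tools on your PATH (and using findstr since I’m on Windows; grep works just as well elsewhere):

# Dump the resource table with values and filter for the interesting entries
.\aapt.exe dump --values resources simplemarkdown.apk | findstr "google_api_key firebase_database_url"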

The list goes on. Given that you can, in fact, find all of the same values from the google-services.json file in the string resources, I guess it really is safe to publish after all. If you’re a lot smarter than I am and you see that I’ve made a terrible mistake somewhere, please do let me know. In the meantime, I’ll simplify my CI setup by just committing the file.

In my previous post where I outlined the goals for this project of mine, I briefly mentioned my thoughts on how to structure the code in order for it to work on as many platforms as possible. In this post, I intend to dive in a little deeper on the architecture of the app to maximize reusability without causing myself too many headaches on whichever native platform I’m writing for.

I’m an Android developer by day, and when structuring a native app written in Java or Kotlin, I usually like to break them down in the following layers:

  • UI layer
  • Repository layer
  • Service layer

I intend to mirror this structure for my cross-platform apps as well. My hope is that I’ll be able to easily use UI-specific objects that are unrelated to the data transfer objects in C, so that I can load up the UI and then only communicate with the native layer where necessary; I could be wrong, but I’m fairly certain that calling into C code from Java is fairly expensive.

The UI Layer

On Android, I generally consider the UI layer to be the Activities, Fragments, and ViewModels/Presenters. For anyone not familiar with Android development, those are essentially all of the components related to outputting information to the user or receiving input from the user. I like to keep this layer very simple, having little-to-no logic beyond just listening for updates from the lower layers or passing along input from the user. I’ll use ViewModels only as a means to avoid losing state between screen rotations, but they’ll otherwise contain no logic. This is also ideally going to be the only layer that contains platform-specific code, so that would be Kotlin for Android, Swift for iOS, GTK/Qt for Linux, etc.

The Repository Layer

The way I see it, the Repository layer is meant to be an abstraction around the various data sources for your app. In most cases this is fairly simple, either just a local database or just a network resource, but I intend to implement some local caching so that the app can still be usable offline. For retrieving data, the app should store a copy locally so that if the user opens the app again without an internet connection, they can still see the last updates. Even if the user has an internet connection, this ensures that something will be visible upon launching the app as a local cache will inevitably be faster than making a network request. I intend to write this layer in C, with the use of cURL and OpenSSL for making secure network requests, and SQLite for caching the data locally. Given that these are all open source cross-platform libraries that, as far as I know, run on multiple architectures including ARM and x86, they should be sufficient for my needs. I’ll likely only need platform-specific code for the object mapping from the C structs to the objects in the platform’s language.

The Service Layer

While the repository abstracts away where the data comes from, the service layer abstracts access to each data source individually. So for my purposes, the repository object would have a reference to a database service object that’s responsible for managing the connections to the database and retrieving or updating the data it holds, and a network service object, which would have a similar role as the database service object except it would handle network resources. I also intend to write this layer in C for use across multiple platforms.


For the next post, I’ll likely write a quick tutorial on how to include C code in your Android app, as it’s something I’m only vaguely familiar with myself, so that will give me a little hands-on experience with the process. From there on, I’ll document the process of actually making this work. Stay tuned for further updates and don’t hesitate to get in touch if you have any questions or suggestions!

Nowadays, there are many different ways to build a cross-platform app, including React Native, Xamarin, and Flutter, to name a few. These each come with their own pros and cons, but they all serve the same purpose: allowing you to create “native” cross-platform apps relatively painlessly and more cheaply than building out separate native apps for each platform.

What I intend to explore through this series of blog posts is not “how can I make a cross-platform app the easiest possible way?”, but rather “how can I make a cross-platform app in the most robust possible way?” I’m not looking for shortcuts or easy ways out, and I want to see just how much of the app’s logic I can share between multiple platforms. In order to keep things interesting, I’m going to start with just Android and eventually branch out to iOS, but I want to take things a step further and port the app over to a desktop platform using the same underlying code, only rewriting the UI layer each time.

Naturally, there are several languages one could use given the constraints I’ve defined, but I want my code to last. I’m aiming for longevity, so I don’t want to worry about the language evolving so much that my code becomes obsolete or so dated that it’s painful to refactor. As such, my language of choice is C. I’m hoping C will give me the greatest amount of flexibility and control over the implementation details, so that I can bring the internal libraries I write to any new platform with just the addition of the UI logic in the platform’s native language, plus minimal C bindings in the event that I’m working with a language I haven’t previously worked with. If I’m being totally honest, I’m also looking for an excuse to work with C a bit more, so this is knocking out two birds with one stone 😛

I don’t imagine this will be quick or painless, so I won’t make any guarantees for how often I’ll post an update, but I’ll try to write something up any time I run into something interesting. Additionally, I’ll keep the code open source so that I can get some feedback from the community and share my findings! Here’s to not taking the easy way out!

Disclaimer: I don’t mean to diminish the work done by developers of other cross-platform frameworks like Flutter, React Native, or Xamarin, nor am I implying that their jobs are easy. Development with these frameworks is as respectable as development in any other domain. At the end of the day, we’re all just trying to solve problems with computers.

Version 0.7.0 of Simple Markdown brings some major changes to the app. Most notably, and likely most requested, is DARK MODE. With Android Q’s built-in implementation coming soon, I figured it was about time to finally get to it. It’s something that’s been on the roadmap for a while, but now it’s finally complete.

With the dark mode rollout, I figured a redesign would be appropriate as well. I’ve been thinking about moving the app bar to the bottom of the screen for a while now, especially since devices keep getting taller and taller. I played around with the idea a bit and eventually settled on a design that I liked. I hope that you all like it as well.

Finally, there were numerous other changes that had to be made in order to accommodate Android Q. Namely, scoped storage. Since I won’t have direct access to the file system anymore, the old file explorer had to go. It was kind of buggy and didn’t always work the way I had hoped, so I opted to just use the system file explorer instead. I’m hoping this won’t cause any major issues for anyone, but please let me know if something isn’t working anymore. One advantage of this move, however, is that you can now save to or open from your cloud file providers directly, without having to manually move the files around. So I suppose it’s not all bad.

The file system changes required me to rework the autosave feature as well. Since I no longer have the access needed to just directly write the file back to the device, I now save your work locally, and when you attempt to create a new file, I’ll prompt you to save or discard your changes.

Lastly, the default root directory option has been removed, since I no longer have control over the file system. I’m hoping this wasn’t a highly used feature but I don’t really have any analytics to tell me so I might have to change that in order to avoid pulling out potentially highly used features.

To wrap things up, I’ve attached a few screenshots to the bottom of this post. Please don’t hesitate to reach out if you have any feedback or would like to contribute!

The old edit pane
The old preview pane
The new light mode edit pane
The new light mode preview pane

The new dark look:

The new dark mode edit pane
The new dark mode preview pane

This month, I’ve started a new position as an Android Engineer at American Express, and I found myself reflecting on how I got here. I have no college degree, and didn’t participate in any coding bootcamp or anything like that, so I figured I’d share my story in case it’s helpful to anyone else out there looking to get into development. For me, things started when I was still in elementary school, around age 10 or 11. My parents upgraded the family computer and gave me the old one to use as my personal computer, which I mostly used for gaming. Back then the games I spent most of my time playing were Star Wars games, like Republic Commando, Knights of the Old Republic, and Battlefront, which to this day are my favorites. I liked these games so much that I started looking into their online communities, and quickly discovered modding and level building. While I never got very far into those, I did start to get into basic web development with HTML, CSS, and JavaScript, to build and modify custom pages for my various interests. Admittedly, most of the “coding” I was doing back then was just copying and pasting snippets I ran across on the internet, with little-to-no modifications of my own. My curiosity grew though, and by middle school, around age 12 or 13, I was hanging around on some hacking forums. My parents weren’t very big fans of this though, so my computer use was dialed back a bit and I took a long break from anything beyond simple gaming.

Fast forward a few years, after I graduated high school, and the story continues. I planned on studying computer science in college, but I wanted to travel a bit, so I decided to take a gap year and go down to Mexico for a little while. My girlfriend at the time (who is now my wife) is from there, so I moved in with her family so that we could spend some more time together. Shortly thereafter I started teaching English to make some money, but I wasn’t very fond of the work at all. Thus, while working as an English teacher, I spent much of my free time learning how to code. It’s been a couple of years since then, but I’ll try to link some of the resources I mention where I can find them.

Initially, I wanted to learn how to do Android development. I had an Android phone and an Android tablet, and was fascinated at the idea of being able to build apps that could run on them. Java was the only supported language at the time, so I attempted to learn it with a book called Head First Java. The book specifically states that it’s not meant for beginners and won’t cover basic programming concepts, but for some reason I thought I could work through it regardless. Much of the book didn’t make sense to me, and I quickly got discouraged and almost dropped programming altogether. Instead though, I googled around a bit and found that many people recommended Python as a good first language to learn, due to its vibrant community, relatively natural syntax, and beginner-friendly content. I decided to try again, this time with Python.

One of the resources I found that was recommended for learning Python was Learn Python the Hard Way, by Zed Shaw. The book was free to read online (and may still be), so I naturally started there. Alongside the book, I also relied on a website called Codecademy, which had a pretty great free course on Python (and it may still be free). Once I had finished these beginner materials though, I quickly ran into the question of “what now?” I couldn’t just do tutorials forever, and so I needed to find a new way to challenge myself. I found some people on online forums suggesting coding challenges to force you to try new things, and so I did a few of those, like writing a tip calculator or a pretend store (think like in a video game) or things like that, but I eventually decided to work on a personal project to try something a little larger. I settled on writing a workout generator.

After a few months of learning about development, a coworker who had a small business asked if I could help him take his website off of Wix and reproduce it on another host. This was technically my first paid gig as a developer, and it helped me get my foot in the door to development as a career. The website is no longer live, and I don’t have any archives of it, but it was quite simple, basically a static website written in pure HTML/CSS, with minimal JavaScript. I didn’t really know what I was doing, but I sort of figured it out as I went. Once that project was complete, I knew that I wanted to do that kind of work full time. I had no degree, basically no experience, and didn’t really know what I was doing, but I was determined to make it happen. I searched all over the job posting sites like Indeed, Monster, and even Craigslist, applying to jobs and offering to accept minimum wage payment if they’d just give me a shot. Eventually, one of the places I applied to gave me an interview for a PHP developer position. I had never written a line of PHP, but after doing a couple of interviews and a basic CSS change to a Ruby on Rails app as a sort of coding challenge, they gave me the position and paid me $10 an hour for full-time work. This was my first big break into the tech industry, and really what helped launch my career into development.

Nearly a year into my first development job, I got a bit arrogant because I’m a quick learner, and felt that I should be paid more. I attempted to negotiate a raise but didn’t get as much as I had hoped for, so I lined up another job as a freelancer and left. Over the course of the next few years, I took on several different jobs, doing mostly web development with PHP, but also branching out into Linux system administration from time to time. I wasn’t a huge fan of doing web development though, and so I decided to try getting back into Android development. I knew a bit more about general development terms and methodologies, and figured I could try giving it another shot. I worked through a few of the chapters in the Head First Java book, but then switched gears to focus more on Android specifically, by enrolling in a free course by Udacity for building Android apps. After completing the course, I built a simple number guessing game to practice the new skills I had learned. I also went and got the Associate Android Developer certification by Google. With a sample of my work, and a new certification, I felt confident enough to take on contract work as an Android developer. I did all sorts of odd jobs, from live wallpapers, to dating apps, and anything in between. Sometimes I was responsible for starting a new project from scratch, and other times I was only hired to do some bug fixes or add a new feature.

My journey began back in 2015, and now, 4 years later, in 2019, I have started as an Android Engineer at American Express. I still haven’t gone to school for computer science, but I do take the time on my own to study things like algorithms and data structures and the other sorts of topics that I would have otherwise learned there. If I could go back in time, I probably would have tried to find a way to study, because I’m quite certain it would’ve made my life a lot easier when I was looking for jobs, and I would’ve hopefully made fewer mistakes along the way, but I’m happy to be where I am regardless and I don’t really have any major regrets. I would caution anyone who wants to attempt to go down the route I did though, for the very reasons I just stated. You will have a much easier time finding work, keeping work, and negotiating pay if you have a degree. I hear college is also quite a bit of fun for most people.

If you are unable to go to college though, then please don’t hesitate to reach out to me if you have any questions or just need help fixing a bug. I don’t have a ton of free time but I would be more than happy to lend a hand to someone who needs it. I owe much of my success to online forums and friends I’ve made along the way, so I’d be happy to be there for others who need help getting started as well. With that said, best of luck and happy coding!

If you’ve been following my blog, you probably saw the post where I outlined my personal git server setup. In it, I showed off the ls command that I’ve configured for quickly viewing my repositories remotely. This is quite limited though, in that it only shows me the names of the repositories, without any further details or even being able to see the files within the repositories unless I clone them locally. Luckily, there’s a much better way to view your repositories remotely, via GitWeb. Better yet, getting up and running is super simple on Ubuntu. In my use case, I’m using Apache, so the first step is to install both packages.

# apt install apache2 gitweb

With the packages installed, we’ll need to enable CGI scripts, which are what power GitWeb.

# a2enmod cgi

Next, we’ll need to create a virtualhost config for the (sub)domain.

# cat <<EOF > /etc/apache2/sites-available/gitweb.conf
<VirtualHost *:80>
    ServerName gitweb.example.com
    DocumentRoot /usr/share/gitweb
    <Directory /usr/share/gitweb>
        Options +ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
        AllowOverride All
        order allow,deny
        Allow from all
        AddHandler cgi-script cgi
        DirectoryIndex gitweb.cgi
    </Directory>
</VirtualHost>
EOF

By default, gitweb is going to look for your git repos in /var/lib/git, so if you’ve got your repos stored elsewhere, you’ll need to open up /etc/gitweb.conf and set the correct value for the $projectroot variable there.
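Since gitweb.conf is just a Perl script and later assignments win, one quick way to do that is to append a line to it; a minimal sketch, using /home/git as a stand-in for wherever your repos actually live:

# echo '$projectroot = "/home/git";' >> /etc/gitweb.conf

Once that’s all ready to go, make sure you enable the site…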

# a2ensite gitweb

… and start up Apache (if it was already running, use systemctl reload apache2 instead so it picks up the new config).

# systemctl start apache2
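If you’d like a quick sanity check from the server itself before sorting out DNS, something like this should spit back GitWeb’s HTML (gitweb.example.com being the ServerName from the config above):

# curl -s -H "Host: gitweb.example.com" http://localhost/ | head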

You can then browse to the domain you’ve configured and you should see a listing of your repos, along with some other helpful information like who was the last person to push a commit, and when.

Bonus: Restricting the Site to Authorized Users

If you don’t want to open up the site to the public, you can restrict it with HTTP Basic Auth. To do so, replace the following lines in the /etc/apache2/sites-available/gitweb.conf file:

order allow,deny
Allow from all

with the following lines:

Order deny,allow
AuthType Basic
AuthName "Restricted Access"
AuthBasicProvider file
AuthUserFile "/path/to/.htpasswd"
Require valid-user

Don’t forget to set up the htpasswd file though:

# htpasswd -c /path/to/.htpasswd your-user
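Note that the -c flag creates the file, overwriting it if it already exists, so for any additional users, drop the flag:

# htpasswd /path/to/.htpasswd another-user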

And that’s it! Feel free to reach out if you have any tips on how to make this any better, I’m always on the lookout for new tips and tricks!

If you’re writing code for any purpose, I sincerely hope you are using some kind of source control. I personally am only really familiar with Git, and as such it is my go-to tool for keeping track of my code. Nowadays, there are plenty of hosts for pushing your code to some remote server to keep it in a safe place, but what if you don’t want to do that for some reason? Hosting your own git server is certainly an option, and isn’t even all that complicated. In this post, I’m going to detail my home git server setup, which is probably quite a bit simpler than most people’s home git server setups, but it suits me fine as I’m the only one who uses it. If you’re looking for a more complete solution with a web interface (I’ll come back to this point in another post), advanced user management controls, or support for Git LFS, then the solution I propose here won’t be sufficient for you, and I’d instead advise you to take a look at some other self-hosted options like GitLab or Gitea, both of which are excellent options that I myself have used in the past on my home server. As for me, I’m content with just the basics over SSH.

Before we get started, I’ll go ahead and cite my source for the inspiration for this setup, the Pro Git book by Scott Chacon and Ben Straub. There’s a lot of great content in there, and it’s totally free, so I highly recommend checking it out. I’m also going to be using a simple Debian 9 docker image for the tutorial, and I’d recommend trying this out in a similar fashion, using a throwaway VPS or virtual machine, before setting up a real server to be used as a git server. A simple Raspberry Pi is most likely sufficient for this, though I’ve yet to test that myself. Without further ado, let’s begin.

Setting up the server

I’ll start by firing up my test server. I’ve simply mapped port 22 on my local machine to port 22 on the Docker container for convenience, but you could map any port you so desire. Note that if you wanted to keep using the docker container afterwards and you wanted to persist any of the data you plan to push to the server, you’d need to set up volumes. Also note that using Docker for this is entirely unnecessary, and I don’t actually use it myself on my home git server.

$ docker run -p 22:22 -it debian:9 /bin/bash

Once that’s ready, we can start setting things up. I like to follow the convention that larger hosts such as GitHub have set by using git as the SSH user. I won’t get into setting up HTTP(S) access in this tutorial, as I myself only use git over SSH.

# useradd -md /home/git git

In case you’re unfamiliar with the above command, it’s just adding a user called git and creating and assigning a home directory for it at /home/git. This will be the user we connect to the server with, so we can run commands like git clone git@my-server. With the git user created, we’ll need to set up SSH access.

# mkdir /home/git/.ssh
# chmod 700 /home/git/.ssh

OpenSSH is a bit particular about the access for the ~/.ssh directory, thus it’ll need to be set to 700, or read/write/execute for the owner of the directory, and no access for anyone else. Next we’ll need the authorized_keys file.

# touch /home/git/.ssh/authorized_keys
# chmod 600 /home/git/.ssh/authorized_keys

Remember to set the permissions for the authorized_keys file as well, to 600, or read/write for the owner, and nothing for anyone else. If you’ve done this all as the root user, don’t forget to make the owner git.

# chown -R git:git /home/git

Then, using vim, nano, emacs, echo, or whatever you fancy, add your public key to the /home/git/.ssh/authorized_keys file. As a side note, if you happen to have multiple people using the git server, and you wanted to have a little more control over who has access to what, you could create different users on the server for each person you wanted to give access to, and then grant access to repos using the Linux filesystem permissions or something along those lines, but I’ll defer to the aforementioned Pro Git book, as I myself don’t share my private git server with anyone.
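In case a concrete example helps, adding a key really is just appending a single line to that file; a quick sketch, assuming you’ve copied your public key over to the server at /tmp/id_rsa.pub (adjust the path to match your setup):

# cat /tmp/id_rsa.pub >> /home/git/.ssh/authorized_keys

Back to the topic at hand, we’ll need to make sure we’ve got both the OpenSSH server and Git installed on the machine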

# apt update
# apt install -y openssh-server git

With the necessary packages installed, let’s fire up the SSH server.

# service ssh start

Then, from another machine (or in my example, the host machine), we can check the connection.

$ ssh git@localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:6A3OqYSNXdrt8gPNrdpCxSVP8UHKy1QYhOq7wB9120c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Linux 2caac8f62d55 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
$

And we’re in. Time to set some things up.

Customizing the Shell

I don’t necessarily want full SSH access for my git user, as I already have another account on the server for myself, so I’m going to restrict the shell for the git user. Back at the root shell for the server, I’ll set the default shell for the git user to git-shell. git-shell is pretty neat, and I’d encourage you to read more about it, but for this tutorial I’ll just cover a few of the basics. First, let’s set the shell for the git user.

# usermod -s /usr/bin/git-shell git

Now, if we attempt to SSH in to the machine again, we should get a different message:

$ ssh git@localhost
Linux 2caac8f62d55 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Feb 16 20:26:16 2019 from 172.17.0.1
fatal: Interactive git shell is not enabled.
hint: ~/git-shell-commands should exist and have read and execute access.
Connection to localhost closed.

Note these two lines here:

fatal: Interactive git shell is not enabled.
hint: ~/git-shell-commands should exist and have read and execute access.

If you take a look at what git-shell does, it’s basically a super limited version of a shell that only allows you to run preconfigured scripts, placed in the ~/git-shell-commands directory. Back as the root user, let’s set this up now.

# mkdir /home/git/git-shell-commands
# chown git:git /home/git/git-shell-commands

I’m going to be adding a couple of files now so that the git-shell doesn’t seem entirely useless. Anything we place in this directory will be offered to the user as an option to run, so I like to have a couple of convenience scripts in there that provide similar functionality to what a web host would offer.

The Special help Script

Upon connection, if you have a script called help in your ~/git-shell-commands directory, it will be run automatically. This would be a good place to put the names and descriptions of all the available commands the user can run. Here’s what my /home/git/git-shell-commands/help script looks like:

#!/usr/bin/env bash

cat <<EOF
Welcome to the Brawner home private git server. The available commands are listed below:

delete [REPOSITORY_NAME]                           - delete a repository
ls                                                 - list the repositories
mirror [REPOSITORY_URL]                            - create a mirror of a repository on another server
new [REPOSITORY_NAME]                              - create a new repository
rename [OLD_REPOSITORY_NAME] [NEW_REPOSITORY_NAME] - rename a repository

EOF

If you add that file, make sure to set the permissions on it correctly:

# chmod +x /home/git/git-shell-commands/*

Then, when you connect to the server via SSH, you get this:

$ ssh git@localhost
Linux 2caac8f62d55 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Feb 16 20:28:51 2019 from 172.17.0.1
Welcome to the Brawner home private git server. The available commands are listed below:

delete [REPOSITORY_NAME]                           - delete a repository
ls                                                 - list the repositories
mirror [REPOSITORY_URL]                            - create a mirror of a repository on another server
new [REPOSITORY_NAME]                              - create a new repository
rename [OLD_REPOSITORY_NAME] [NEW_REPOSITORY_NAME] - rename a repository

git>

Instead of having our connection dropped, we’ve now got this git> prompt where we can enter commands. We haven’t added anything just yet, so let’s start with the new command, so that we can create new repos.

Creating New Repositories Remotely

Here’s what my version of this looks like:

#!/usr/bin/env bash

if [[ -z "$1" ]]; then
    echo "Please enter a repository name"
    exit 1
fi

if [[ -d "$1.git" ]]; then
    echo "Repo $1 already exists"
    exit 1
fi

/usr/bin/git init --bare "$1.git"
/usr/bin/git -C "$1.git" config http.receivepack true
echo "Successfully created repository $1"

It’s a pretty simple script, but let’s run through it just to make sure it’s all clear. I first want to make sure that I’ve gotten a name for the new repository, and exit the script if it’s missing.

if [[ -z "$1" ]]; then
    echo "Please enter a repository name"
    exit 1
fi

If I’ve got a name for it, then I want to make sure I’m not going to try to create a repository that would conflict with another file or folder on the server. Git would happily reinitialize an existing repository here without complaining, so I do this check myself to get a clear error message instead.

if [[ -d "$1.git" ]]; then
    echo "Repo $1 already exists"
    exit 1
fi

We then create a new empty repository…

/usr/bin/git init --bare "$1.git"

… and allow for pushes over HTTP, in case I ever decide to set that up later (it isn’t needed for plain SSH access).

/usr/bin/git -C "$1.git" config http.receivepack true

Lastly we echo out a little success message.

echo "Successfully created repository $1"

Let’s add this to the server and give it a shot. Don’t forget to add the executable flag to the file, or it won’t work.
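Assuming you’ve saved the script as /home/git/git-shell-commands/new, that would look like:

# chmod +x /home/git/git-shell-commands/new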

git> new my-repo
Initialized empty Git repository in /home/git/my-repo.git/
Successfully created repository my-repo

We now have an empty git repo we can clone. Back on the host machine, let’s give this a clone to see that it’s working.

$ git clone git@localhost:my-repo
Cloning into 'my-repo'...
warning: You appear to have cloned an empty repository.

Awesome! Let’s add a file and push it.

$ cd my-repo
$ echo "# My Repo - A Git Server Test Repo" > README.md
$ git add README.md
$ git commit -m "Add README"
[master (root-commit) 545d3b7] Add README
1 file changed, 1 insertion(+)
create mode 100644 README.md
$ git push
Counting objects: 3, done.
Writing objects: 100% (3/3), 248 bytes | 248.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To localhost:my-repo
* [new branch]      master -> master

Everything seems to be working just fine! We were successfully able to clone, commit, and push. Let’s take a look at some of the other commands I like to use on my git server. Creating repos is nice, but it’s good to keep track of the ones we have too. For that, I’ve created the ls command.

Viewing Existing Repos

You might think I’m simply wrapping the ls command that comes built-in, but like any good engineer, I had to overengineer the problem. I don’t want to see just any file, I’m looking for git repos. So, here goes:

#!/usr/bin/env bash

for repo in $(/usr/bin/find -maxdepth 1 -type d -name "*.git"); do /usr/bin/basename $repo .git; done | sort

This one’s a bit shorter, and if you’ve put that in the right place and set the permissions correctly, running it should produce the following:

git> ls
my-repo

We’ve only got one repo, so the output is a little weak, but you get the idea.

Renaming Repositories Remotely

my-repo sounds a bit generic, and this was a test repo, so let’s name it accordingly. test-repo sounds so much better! For this, I’ve got the rename command. Here’s the script:

#!/usr/bin/env bash

if [[ -z "$1" ]]; then
    echo "Please enter a source repository name"
    exit 1
fi

if [[ -z "$2" ]]; then
    echo "Please enter a destination repository name"
    exit 1
fi

if [[ -d "$2.git" ]]; then
    echo "Repo $2 already exists"
    exit 1
fi

mv "$1.git" "$2.git"
echo "Successfully renamed repository $1 to $2"

We can break this one down pretty quickly too. We first want to make sure we have the source (or old) repository name…

if [[ -z "$1" ]]; then
    echo "Please enter a source repository name"
    exit 1
fi

… as well as the destination (or new) repository name…

if [[ -z "$2" ]]; then
    echo "Please enter a destination repository name"
    exit 1
fi

… and we don’t want to try to rename a repository to an existing repository name…

if [[ -d "$2.git" ]]; then
    echo "Repo $2 already exists"
    exit 1
fi

… but if that’s all good, then we go ahead and rename the directory.

mv "$1.git" "$2.git"
echo "Successfully renamed repository $1 to $2"

I don’t like to repeat myself, so I leave the .git suffix off of the repository names and just have the scripts append it for me. Let’s give this one a go.

git> rename my-repo test-repo
Successfully renamed repository my-repo to test-repo
git> ls
test-repo

Another successfully working command.

Cleaning Up Unneeded/Unwanted Repos

With the testing out of the way, it’d be nice to be able to remove this repository. For that, I’ve got the delete command.

#!/usr/bin/env bash

if [[ -z "$1" ]]; then
    echo "Please enter a repository name"
    exit 1
fi

if [[ ! -d "$1.git" ]]; then
    echo "Repo $1 doesn't exist"
    exit 1
fi

rm -rf "$1.git"
echo "Successfully deleted repository $1"

I imagine by now this doesn’t need much explanation, so I won’t get into the details, except to say that I make sure I have a repo name to delete, and make sure that it exists, and then delete it. You could probably make this a bit more advanced by moving it to some sort of trash bin for 30 days before deleting it, but I’m not that cautious with my personal projects and when I decide it’s time for something to go, it’s time for it to go. Let’s clean up this test repo.

git> delete test-repo
Successfully deleted repository test-repo
git> ls

Tracking Other Repos

Most hosts like GitHub also allow you to mirror other repositories, and that’s a feature I tend to use quite a bit myself. To make this easier on my local git server, I use the mirror script.

#!/usr/bin/env bash

if [[ -z "$1" ]]; then
	echo "Please enter a repository URL"
	exit 1
fi

/usr/bin/git clone --mirror "$1"
echo "Successfully created a mirror of $1"

This one shouldn’t need much explanation, so let’s give it a run.

git> mirror https://github.com/wbrawner/SimpleMarkdown
Cloning into bare repository 'SimpleMarkdown.git'...
remote: Enumerating objects: 107, done.
remote: Counting objects: 100% (107/107), done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 1832 (delta 33), reused 75 (delta 31), pack-reused 1725
Receiving objects: 100% (1832/1832), 783.26 KiB | 638.00 KiB/s, done.
Resolving deltas: 100% (891/891), done.
Successfully created a mirror of https://github.com/wbrawner/SimpleMarkdown
git> ls
SimpleMarkdown

Now, with mirroring there’s an extra step involved. You need to set up a cron job to ensure that your mirrors always stay up-to-date. On most Linux systems, cron should be installed and ready to go. The same is not true for our Docker container though, so we’ll need to set that up real quick (and don’t forget to persist this should you decide to use the Docker setup permanently).

# apt install cron
# service cron start

With cron ready to go, we can add a line to our crontab with this command:

# crontab -eu git

The line to add is as follows:

0 * * * * for repo in $(/usr/bin/find /home/git -maxdepth 1 -type d -name "*.git"); do /usr/bin/git -C $repo remote update; done

Just make sure you’ve got an empty line after that (cron wants the file to end with a newline). This will run through each of your repositories once per hour and pull any updates for the ones with remotes configured, i.e. your mirrors.
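If you want to double-check that the entry took, you can list the git user’s crontab:

# crontab -lu git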

Wrapping Up

Congrats, you now have your own private git server! I may write some more posts later about setting up a simple web interface and getting a Git LFS setup to work here as well. Please reach out to me if you see me doing something horribly wrong or inefficient, or if you have other useful scripts that you like to use for your self-hosted git server.

Recently, while working on a job, I ran into an interesting problem with Spring Boot: the configuration files I had defined using the @PropertySource annotation were being overridden by the application.properties file, which I had also defined with a @PropertySource annotation. Why wasn’t Spring prioritizing my configuration files correctly?

What I was attempting to accomplish was shipping my jar with some default configuration options that were mostly tailored for my local development setup, and then being able to specify an external configuration file for staging and production that would override the values accordingly. I got the idea from a project on GitHub called Acrarium, but unfortunately I didn’t quite take the time to slow down and understand what was going on before attempting to implement something similar myself.

I came to the realization that something was wrong because, while testing deployments with some docker containers locally, my Spring Boot app was crashing since it couldn’t connect to the database. I spent hours trying to figure out why the Spring Boot container couldn’t connect to the MySQL container, going as far as installing the MySQL command line client to manually verify that the containers were in fact connected and able to communicate with each other. That of course wasn’t the problem, so I took to StackOverflow to find some answers.

I discovered that you can pass an argument to the application while running it as a jar, specifically --spring.config.location, and while it fixed my issue, I wasn’t satisfied with this, because I still didn’t understand what the original problem was, and I wasn’t content to just shrug my shoulders and move along. It was only through searching around a bit more that I stumbled upon the Spring Boot documentation page for externalized configuration and realized my mistake. If you look towards the bottom of that list, item #15 at the time of writing is the application.properties file, and one level lower in priority for Spring are the configuration files declared via the @PropertySource annotation.
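For reference, the workaround looks roughly like this when launching the jar, with my-app.jar and the config path just being placeholder values:

$ java -jar my-app.jar --spring.config.location=file:/etc/my-app/application.properties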

The solution, then, was to rename my application.properties file to something else, and add that new file inside another @PropertySource annotation on my Application class. Problem solved! Yet another case of the problem lying between the chair and keyboard. Had I read through the source of the Acrarium project, from which I took the inspiration for this setup, a little more closely, I would have noticed that they weren’t using the application.properties file to define overridable defaults, but rather a default.properties file. Lesson learned here: slow down and read the source, make sure you understand it and can explain what’s going on, and then try to imitate it in your own work.

Working with a database is pretty much a given for most of the projects I’ve worked on lately, which means that in order to get any work done locally, I’ve had to install a MySQL server, configure it, add the users and databases for each project, and grant the permissions accordingly. Because I didn’t want to have to go through the hassle of installing MySQL and then writing a bunch of SQL commands each time I needed to create a database + user with permissions for a new project, I decided to do things a little unorthodox and use Docker instead. The official MySQL Docker image does a lot of the setup for you with just a couple of environment variables, which makes it pretty convenient to quickly get a server up and running without much fuss. Here’s how it works:

You’ll first need to pull the MySQL image. I’ve chosen version 5.7, but you can use any of the versions listed on the Docker Hub page.

docker pull mysql/mysql-server:5.7

This’ll probably take a minute while the image is downloaded. While that’s downloading, you can optionally set up some nice shortcuts to quickly start and stop the different servers for your apps. Just add this to your shell profile file and I’ll break it down in a second. On macOS this would be the ~/.profile file and on Linux it’s usually the ~/.bashrc file.

start-mysql()
{
    APP_NAME="$1"

    if [ -z "$APP_NAME" ]; then
        echo "Please provide an argument for APP_NAME";
        return 1;
    fi

    docker run \
        -d \
        --rm \
        -p 3306:3306 \
        -e MYSQL_ROOT_HOST=172.17.0.1 \
        -e MYSQL_DATABASE="$APP_NAME" \
        -e MYSQL_USER="$APP_NAME" \
        -e MYSQL_PASSWORD="$APP_NAME" \
        -v "$APP_NAME-mysql":/var/lib/mysql \
        mysql/mysql-server:5.7
}

stop-mysql()
{
    docker stop $(docker ps | grep mysql | grep 3306 | cut -f1 -d' ')
}

dmysql()
{
    APP_NAME="$1"

    if [ -z "$APP_NAME" ]; then
        echo "Please provide an argument for APP_NAME";
        return 1;
    fi

    mysql -h127.0.0.1 -u"$APP_NAME" -p"$APP_NAME" "$APP_NAME"
}

The above snippet basically creates three new commands you can run on the command line: start-mysql, stop-mysql, and dmysql (shorter than docker-mysql). In short, start-mysql and dmysql take in a parameter called APP_NAME, which will be used as the database name, user, and password. Obviously this is horribly insecure, but it’s probably fine for your local development environment. The first couple of lines for start-mysql and dmysql here:

APP_NAME="$1"

if [ -z "$APP_NAME" ]; then
    echo "Please provide an argument for APP_NAME";
    return 1;
fi

… are just checking that the APP_NAME variable isn’t empty, or aborting if it is. Moving on, we can break down the next part of the start-mysql method. We’re going to use docker to run a container image…

docker run \

… and detach from it once it’s started up. You can exclude this flag if you want to keep the logs open while the MySQL server runs (though you could also do that with docker logs -f):

    -d \

We’re also going to destroy this container once we stop it. Since the data is persisted outside of the container, there’s no need to keep it around:

    --rm \

MySQL by default runs on 3306, so we’ll need to allow Docker to expose that port to us (though it won’t be exposed to the rest of your network unless you’ve done extra configuration on your computer):

    -p 3306:3306 \

And then comes the MySQL configuration. This first line allows you to connect to your MySQL server as root from outside of the Docker network. You see, by default, MySQL will only allow root connections from localhost. Since the MySQL server is technically running on a different host (a container, which on macOS and Windows also sits inside a VM running on top of your computer), you’re not on localhost anymore, and you’ll need to allow connections through the Docker NAT.

    -e MYSQL_ROOT_HOST=172.17.0.1 \

Next, we set up the default database, user, and password using the APP_NAME parameter that was passed in earlier for all three values. As I mentioned before, this isn’t secure, so please don’t do this in production:

    -e MYSQL_DATABASE="$APP_NAME" \
    -e MYSQL_USER="$APP_NAME" \
    -e MYSQL_PASSWORD="$APP_NAME" \

Lastly, we configure a volume to persist the data between starts/stops. If your app handles schema creation on startup, or you don’t want to persist data between server restarts, then you can skip this line and it’ll work just fine. If you ever need to delete this volume, it’s just named after the value given for APP_NAME plus a -mysql suffix.

    -v "$APP_NAME-mysql":/var/lib/mysql \

Finally, we tell Docker which container image we want to run.

    mysql/mysql-server:5.7

And voilà, we now have a quick command to start up a MySQL server that is independent of any other servers for other apps we’re working on locally. To connect to this server with the command line, you can just use the dmysql command if you’ve so chosen to add it, or type it out each time. To connect from your application, say Spring Boot, to a database called “test”, your application.properties file would look as follows:

spring.datasource.url=jdbc:mysql://localhost:3306/test
spring.datasource.username=test
spring.datasource.password=test

Finally, the stop-mysql command does what you would expect, and it’s the only one of the three that doesn’t require any arguments. Since you can only have a single MySQL server running at a time on port 3306, I have chosen to just stop whatever other server I have running in order to start up a new one. stop-mysql simply finds the MySQL container running on port 3306 and stops it.
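To put it all together, a typical session for a hypothetical app called myapp would look something like this:

$ start-mysql myapp   # start a fresh MySQL 5.7 container for myapp
$ dmysql myapp        # open a mysql prompt connected to it
$ stop-mysql          # stop (and remove) the running container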

That is all for now! If you’ve got other tips and tricks for getting your local development environment set up quickly and painlessly, don’t hesitate to reach out to me and share them!

Within the community of Linux users and fans, there’s a term called “distro hopping”, which refers to the act of constantly jumping around between different distributions, or “distros”, of Linux. A few examples of popular distributions are Ubuntu, Debian, Red Hat Enterprise Linux (RHEL), or Fedora. Back in the days when I ran Linux on my desktop as my primary OS, I was very much guilty of distro hopping, having tried out pretty much all of the popular distros at least for a short while. These days I’m using a MacBook as my primary computer, and thus macOS as my primary OS, so I haven’t been doing much changing around of my desktop environment, though I have undergone a few changes with my blogging platform over the past couple of years. So I guess you could say I’ve been blog platform hopping a bit 😛

When I first got started with blogging, my website was originally running on WordPress. WordPress is great as a tool for blogging, but I found that for me personally, it wasn’t worth the trouble. You see, WordPress powers a lot of the internet. The estimates I see vary quite a bit, but they generally land somewhere around a third of all websites, which works out to well over half of all sites built with a known CMS. I think it’s safe to say that WordPress is the single most popular website platform for the time being.

With this broad popularity comes some pretty great advantages: the tooling, support, and availability of up-to-date tutorials and how-tos for getting things done are all pretty stellar. If you want to do something with WordPress as your backend, odds are someone has already at least started it, or you can string a few plugins together to get close to what you need. Is that ideal? Probably not, but if you’re on a strict time or cost budget, or don’t need much more than a simple company website and/or blog, then WordPress is a great option for you.

However, being the most popular platform for websites comes with some drawbacks too. I found that I was constantly having to harden and monitor my site because of countless attempts to hack into it, sometimes causing my whole site to crash or making it difficult to tell which posts of mine were actually interesting and which were just being perused by curious script kiddies. I was also dissatisfied with the performance of WordPress. I was probably a bit overly picky on that one, as the performance was just fine for a blog with super low legitimate traffic, but it still bothered me. I had often considered building out a single page application that just used my WordPress site as a back end, but I’m not a huge fan of JavaScript, and I get a bit tired of learning some new framework or tool every year because those JS people love to change their minds on what the cool new thing is. Alas, I came upon another solution: static website generation.

It seems to me like static websites have made a bit of a comeback in the past couple of years. With the rise of things like GitHub Pages, I find that today there are almost more static site generators than there are full-blown CMSs. Thus, I turned here to solve my problems. Static sites are by no means unhackable, but they’re certainly more difficult to compromise than a database-powered site. Not only that, but the performance is almost always better than that of a typical CMS, which sometimes has to make multiple database requests in order to serve up a given page. Static sites aren’t perfect either though, and they certainly come with their own drawbacks as well.

I began using a tool called Jekyll, which as I understand it is the tech powering GitHub Pages. Jekyll is a static site generator written in Ruby. I myself know little-to-nothing about Ruby or its tooling, so I always found the setup a bit clunky and difficult. I also found it a bit annoying that I needed to install a bunch of dependencies like ruby, bundler, and gem to be able to produce my site. These dependencies would go on to just take up space on my computer, as I didn’t use them for anything else. I didn’t really want to get too much into the template engine syntax (I think it’s Liquid, but don’t quote me on that) either, so I ended up just grabbing a pre-existing theme and making minor CSS edits to it. All in all, my site felt sort of generic to me, and certainly not a great showcase of my skills and knowledge. I think this is fairly obvious too: I made the switch sometime in late 2017, and there was a stark decline in the number of posts I published throughout the rest of 2017 and all of 2018. I decided this needed to change, so I had to make a choice: either figure out this Ruby stuff and get comfortable building my own unique site with a comfortable composing experience, or drop it altogether and replace it with another blogging platform. I went with the latter.

Initially, I was planning on just going back to WordPress. The writing experience in WordPress is excellent in my opinion, and I love being able to start a post, save it as a draft, and pick up from any device with an internet browser. I often like to start posts from my phone, usually as just a few bullet points or a couple of sentences to get some ideas in writing, and then elaborate on them once I’ve got a full keyboard at my disposal. With a static site, this is a bit more cumbersome, as I just have a series of files stored in a git repository, and I don’t particularly enjoy trying to manage git repos from my phone. Upon setting up my old WordPress site for reuse however, I discovered that the new Gutenberg writing system from WordPress only supports a limited set of markdown syntax without the use of plugins. On my site, I couldn’t even get any of the markdown to be recognized though, so all of my posts looked terrible in their published state. I don’t really want to go back and rewrite all of them to use HTML, nor do I really want to bother with attempting to import all of them and verify that the styles are good, so I abandoned this idea. I wasn’t thrilled about the maintenance headaches anyways, so I think it was a good choice for me.

With WordPress out of the picture again, I took a look at Pelican. Pelican, much like Jekyll, is a static site generator. Unlike Jekyll, Pelican is written in Python. I found this to be quite nice, as Python was the first real programming language I learned and used, and I had a bit of experience with Flask, so I was comfortable with the Jinja templating engine as well. Or so I thought. It’s actually been about 3 years since I regularly used Python for anything more than simple scripts, so I quickly found it awkward to do anything meaningful with my templates. Seeing as I don’t intend on becoming a full time Python developer, nor do I want to maintain any Python-related code for any reasonable amount of time, I ditched Pelican and moved on. Pelican seems great, and I have no complaints about it, but I’d prefer to use more tools that I use daily if I’m going to be needing to update my site even yearly.

For the past 2 years or so, my language of choice has been Java. Coming from a background of Python, PHP, and JavaScript, I found typed programming languages refreshing, and compiled ones even more so. I can have my errors checked before I even run my code? Excellent! I’m sure that using Java all day every day for the past 2 years has also contributed to my growing affection for it, as I tend to get comfortable with my tools and then don’t particularly like to change them much. Since I still wanted to stick with static site generation, I took a look around for a Java-based static site generator. Luckily for me, I found JBake, which, at the time of writing, is still under active development. I was quickly able to put together a site using Thymeleaf as the templating engine, leaving my posts as they are in markdown, with just a simple search and replace to fix some of the Liquid syntax from the Jekyll blog. To be fair, I didn’t know even a little about Thymeleaf when I first began, but I was very keen to learn it, as it seems to be quite a popular template engine for Java-based websites. I’m glad I did too, as I quickly came to enjoy working with it, and it’s been well worth the investment since I’ve returned to working as a freelancer. I’ve landed at least 2 large contracts (large for me at least) so far where knowledge of Thymeleaf was a must. I happily made my own simple theme that, while currently a bit bland, offers me the opportunity to learn and experiment with tech that’s more relevant to my career. I plan to spice things up a bit with the design while still maintaining the simplicity where possible, so keep an eye on this space 🙂

There’s still one more problem with static websites that I haven’t yet discussed, and it bothers me a bit: the lack of interactivity. Naturally, since there’s no database from which the contents of my posts are served, there’s no way to search the posts. On my Jekyll site I had a clever solution, which I unfortunately can’t take credit for, that involved searching through the site’s XML feed, though it always felt like a bit of a hack, and a limited one at that; if I had ever implemented paging in my XML feed, it would have needed a fair amount of refactoring. Another related issue was the inability to have a contact form. I previously just used a Google form to get around this, but again, it was an ugly hack that, to be honest, I’m never sure even worked. I have some ideas to get around these two problems, but nothing I’ve finalized yet. As soon as I come up with something, I’ll be sure to publish a post about it though 🙂

To wrap things up, I just want to clarify that I think WordPress, Jekyll, Pelican, and JBake are all great solutions to a common problem. There’s also nothing wrong with PHP, Ruby, Python, or JavaScript. My decisions to change my blogging platform of choice were based entirely on personal preference, and I don’t want anyone to feel like I’m attacking their project or favorite tool or anything like that. I think there’s enough tool-shaming in the developer community, and it’s completely senseless in my opinion. I’ll save that rant for another blog post though. Until then, please feel free to reach out to me with your comments, questions, and suggestions. I’d love to hear from you!