Business Model Canvas

So I presented The Personal MBA book to Jose at the Universidad Externado, and he liked it and mentioned this other model:

Business Model Canvas is a strategic management and lean startup template for developing new or documenting existing business models. It is a visual chart with elements describing a firm’s or product’s value proposition, infrastructure, customers, and finances. It assists firms in aligning their activities by illustrating potential trade-offs.


He mentioned that it largely boils down to having an unfair advantage: what makes this business, or this business model, better than the competition? (Because if nothing does, it’s not worth pursuing.)

Cold Lead Generation

How do you find startups that need project work? I posed this question to my brother in a phone conversation the other day. I suggested TechCrunch / Crunchbase, and he said that the top earners on Upwork, oDesk, and Elance are a good list of potential clients.

From my point of view, the interesting companies are those in high tech that just got money to spend on R&D (research and development). Notably, TechCrunch publishes lists of such companies.

From my brother’s point of view, remembering his freelancing days, the analysis was as follows. The majority of projects on Elance (or any freelance site) are under $100, or $100–$500. The top earners, however, pick the >$500 projects, and those projects are likely to run for months or years. The clients who need such projects are the interesting clients.

There is also cold calling, of course: grab a Yellow Pages book and start looking at each company, one by one. I feel I should try something like this, just to get experience with sales.

And with that, I saw a whole lot of companies to reach out to. So finding companies that need the services we offer is not a tremendous problem. It’s more important to maintain a good presence (online and in person), and most importantly, to offer relevant and competitive services.

Here are some lists of companies that may need tech, just for reference:

Lunch and Learn – Topics in Scaling Web Applications – From Tens to Hundreds of Concurrent Users


About the Author

Victor Piousbox is a senior-level full-stack software engineer. He leverages his eight years of development experience to recommend and implement non-trivial technical solutions. He likes finding and addressing performance bottlenecks in applications, and works hard at being able to recommend the best tools and the right approach to a challenge.


In this episode we’ll talk about the particular challenges we faced in February 2017 at Operaevent, when we were addressing the resilience, performance, and scalability of our infrastructure.


We have a hybrid stack that makes heavy use of Ruby and JavaScript, in an N-tier architecture with RESTful and socket communication between the back end and front ends. Storage is MongoDB for persistent data, Redis for in-memory data, and S3 for files; caching is on-disk.

The front ends are: the chatbot interface (speaking IRC), a jQuery-heavy web UI, a React.js Chrome extension, and the jQuery-heavy OBS layer.

The middle tier consists of the Ruby API, a Node.js socket emitter, a number of Ruby services, and a number of background and periodic workers.


After reaching a certain usage threshold, our services, particularly the chatbot interface, started crashing a lot. The worst of it was that it would consistently go offline at night, outside business hours, when nobody was in the office to fix or at least restart it. This happened every night for several weeks, at the most inconvenient of times: 4am, or around midnight. It became critical to guarantee much better uptime in order for our service to be usable.


We spent a non-trivial amount of effort troubleshooting. We would find a performance bottleneck and address it, which positioned us to find the next one. With this iterative approach we implemented dozens of changes, and the end result was a resilient application that stopped falling over. We don’t have exact metrics on stability, but it was well within the requirements to consider our services stable.

The first step was to look at the logs: Apache logs, application logs (each service has its own log), error logs, and access logs.

Additionally, we implemented services that collected the metrics we were interested in, giving us custom logs on the performance of our boxes.

We also installed a number of monitoring agents, including the MongoDB monitoring agent.

There was a particular error message in the logs that preceded downtime: “unable to get a db connection within {timeout} seconds.” We built a simple stress test that could reproduce the exact error on a small scale. Once the error was consistently reproducible, it was much easier to find the exact numbers and configuration parameters that were causing it. To address this bottleneck, we increased the number of connections in the application’s database pool and adjusted the timeout interval to a sensible value.
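For reference, the shape of that change in a Mongoid-style configuration looks roughly like this (file layout, names, and numbers are illustrative, not our exact production settings):

```yaml
# config/mongoid.yml (illustrative -- values are examples, not ours)
production:
  clients:
    default:
      database: app_production
      hosts:
        - localhost:27017
      options:
        max_pool_size: 50       # pool was too small for our concurrency
        wait_queue_timeout: 15  # seconds to wait for a free connection
```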

Next was an error to do with file descriptors: the kernel would complain that too many file descriptors were open, and we would experience downtime then.

The basic fix was to increase the number of file descriptors a process can hold open at any time. This limit exists per user as well as system-wide.

It took us a while to discover that Upstart, the service manager on Ubuntu 14.04, does not honor ulimit settings. This is because ulimit settings are per-session, and services aren’t run in sessions; Upstart has its own mechanism for defining those limits. Furthermore, the number of file descriptors can be capped at the system level, which is what we set in the end.
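For illustration, the Upstart-level and system-level knobs look something like the following (the service name and numbers are hypothetical):

```
# /etc/init/chatbot.conf -- Upstart job stanza. Upstart ignores
# per-session ulimit, so the job must declare its own limit:
limit nofile 65536 65536

# System-wide ceiling, e.g. in /etc/sysctl.conf (apply with `sysctl -p`):
#   fs.file-max = 200000

# Per-user limits, e.g. in /etc/security/limits.conf:
#   appuser  soft  nofile  65536
#   appuser  hard  nofile  65536
```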

In addition to increasing the limits on open file descriptors, we separated the services into individual users. Each service is now run by its own user, as opposed to one user running all the services. This gives us better scaling and better isolation between services.

The next step was a manual code review. We looked at what the code was doing, to see if any areas looked problematic. There were several safeguards and checks that were computationally expensive. We refactored them so that each check is either fast, runs less often, runs at a later time, or runs in the background.
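As a sketch of the “doesn’t happen as often” variant (this is illustrative Ruby, not the actual Operaevent code): wrap the expensive check so it runs at most once per interval and serves a cached result in between.

```ruby
# Throttle an expensive safeguard so it runs at most once per interval,
# instead of on every chat message. Names here are hypothetical.
class ThrottledCheck
  def initialize(interval:, &check)
    @interval    = interval
    @check       = check
    @last_run    = nil
    @last_result = nil
  end

  # Runs the check only if the interval has elapsed; otherwise returns
  # the cached result from the previous run.
  def call
    now = Time.now
    if @last_run.nil? || now - @last_run >= @interval
      @last_result = @check.call
      @last_run    = now
    end
    @last_result
  end
end
```

A wrapper like this trades a bounded amount of staleness for not paying the expensive computation on every message.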

We looked at which database queries take the most time. Unsurprisingly, there were optimizations to be made there. We denormalized some data to reduce the number of queries executed for each chat message; overall, we probably halved the number of queries per chat message.
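The idea behind the denormalization, sketched with plain Ruby hashes (field names are made up, not our actual schema): instead of looking up the sender’s user record for every message, copy the few hot fields onto the message itself at write time.

```ruby
users = { 42 => { name: "piousbox", badge: "mod" } }

# Normalized: rendering each chat message costs an extra user lookup.
message = { user_id: 42, text: "hello" }
sender  = users[message[:user_id]]

# Denormalized: the hot fields are copied onto the message when it is
# written, so the read path touches a single document.
denormalized = { user_id: 42, user_name: "piousbox",
                 user_badge: "mod", text: "hello" }
```

The usual trade-off applies: a username change must be propagated to existing messages, or tolerated as stale.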

We implemented a watchdog on the service: if the service does not respond within a set time (60 seconds), we get a notification. We could have made the watchdog restart the service automatically, but we opted to receive a notification only and restart manually as necessary.
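A hypothetical sketch of that watchdog shape (not the production implementation): probe the service, and notify, rather than auto-restart, when the probe raises or exceeds the deadline.

```ruby
require "timeout"

# Returns true if the probe completed in time; otherwise invokes the
# notifier and returns false. We rescue StandardError broadly because
# any probe failure should produce an alert.
def watchdog(timeout: 60, notify:, &probe)
  Timeout.timeout(timeout) { probe.call }
  true
rescue Timeout::Error, StandardError => e
  notify.call(e)  # e.g. page the team; the restart stays manual
  false
end
```

In our setup the probe would be a lightweight request to the chatbot service, and the notification goes to a human instead of triggering an automatic restart.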

We refactored the application to cleanly separate message sending and receiving from message processing. With every chatbot command converted to a background worker, we are better positioned to scale: we can increase the hardware resources allocated to message processing without producing duplicate messages, and we can fail over message sending/receiving without affecting message processing. This gives us the ability to scale each component individually. Apart from configuration tweaking, this was the single most important change we introduced.

For sending and receiving messages, we converted from a database-backed queue to an in-memory one: from MongoDB to Redis. We also went from polling to a callback architecture on that piece. Instead of polling the database every second or two, we now register a callback with Redis that is triggered on queue push. While I believe this did not directly affect resilience or stability, it did produce a noticeable performance improvement.
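The shape of that change, sketched with Ruby’s built-in thread-safe Queue standing in for Redis (the real system registers a callback on a Redis queue push; everything below is illustrative):

```ruby
# Blocking pop instead of timed polling: the consumer wakes up exactly
# when a message arrives, rather than checking every second or two.
queue    = Queue.new
received = []

consumer = Thread.new do
  loop do
    msg = queue.pop        # blocks until something is pushed
    break if msg == :stop  # sentinel to shut the consumer down
    received << msg
  end
end

queue << "hello"
queue << "world"
queue << :stop
consumer.join
```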

Findings and Changes

We implemented about a dozen changes, with the cumulative result that our infrastructure became stable.

Tools we used
  • log analysis, better log collection
  • more monitoring, custom monitoring and log collection
  • custom stress tests
  • code review and optimizations, db queries review and optimizations
  • more caching
  • moving storage in memory (redis)
  • converting timed polling to event callbacks
  • introducing a watchdog, better use of background workers
  • denormalization of data in the db
  • security settings tweaking, application configuration tweaking.

Planning Ahead

We can still separate the services further. At this time, a single virtual box can be running several services: it can be an API app at the same time as a websocket app. We anticipate that eventually all services will be separated onto individual boxes. Furthermore, we can cluster each service, with several machines powering a single service. The architectural decisions we have made so far accommodate that.

We can add utility boxes that handle heavier data-processing operations. One of the computationally expensive things we do is report generation, which currently happens on production boxes. We can offload that work to utility boxes so that production boxes don’t see a usage spike.

Elements of Corporate Culture

We are a small company with very little bureaucracy or politics going on. That said, we have some definite elements of corporate culture – we define our own corporate culture – that we like to follow to improve everyone’s productivity. Here is a simple list:

  • Daily scrum meeting at 9:15am
  • Slack is our preferred method of communication, after face-to-face communication (our office workers are much more effective than the remote ones).
  • If you are late, announce it on the general chat.
  • We have the rotating Wizard status: it is a desk statuette that gets awarded temporarily to a member of the team for exceptional achievements.
  • We generally go out to eat as a team once every week.
  • We reserve environments during standup at the beginning of the day. We have the following environments: staging, production, dev1, dev2, dev3. We put on the dry erase board(s) who is working on which environment that day.

Branching strategies and github usage in our code

At Operaevent we have two branching strategies. One is based on master: master is the main stable branch, and feature branches are created and merged into master by all the developers. This is the case in the `bounties-frontend` repo, where the main branch is master.

The other strategy is a variation of semantic versioning: versions have the format x.y.z (major.minor.patch), and the 0.x.0 branches are the stable branches, where the highest x is the latest code running in production. In particular, the `node` codebase is on branch 0.5.0 right now, and `gather-chrome` is on 0.1.0.

  • For now, our semantic versioning offers several advantages: (1) sorting branches alphabetically makes very good sense, and we can handle dozens of branches without confusion; and (2) you always know what’s running in production and can fall back to an earlier version easily. Additionally, this methodology alleviates the need for creating release tags.
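A quick Ruby illustration of the sorting point (branch names here are made up): plain string sort agrees with version order while components stay single-digit, but diverges once a component reaches two digits, where `Gem::Version` still sorts correctly.

```ruby
branches = ["0.5.0", "0.1.0", "0.4.2", "0.10.0"]

plain    = branches.sort                                 # string order
semantic = branches.sort_by { |b| Gem::Version.new(b) }  # version order

# plain    => ["0.1.0", "0.10.0", "0.4.2", "0.5.0"]
# semantic => ["0.1.0", "0.4.2", "0.5.0", "0.10.0"]
```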

We have daily deliverables! This means that at the end of your day, you should commit everything you have written during the day, and preferably issue a pull request.

If you have not worked on a codebase for a while, branch off of the most recent branch (0.x.0) and pull request into it at the end of the day.

The work flow for our code repos, particularly gather-chrome, is as follows:

  • branch off of the most recent stable branch 0.x.0
  • pull request into it at the end of the day.
  • specs are optional right now, but will eventually be required. Specs now earn you bonus points.

While you are working on an issue, assign it to yourself in GitHub Issues. I encourage you to work on one issue at a time: finish an issue, open a pull request for it, and assign the next issue to yourself.

Tech’s gilded glory didn’t mean much to Trump’s supporters


This, unfortunately, is happening now. NASDAQ is way down today, and the biggest bearish market movers are all in tech: Facebook, Alphabet, and Apple. By itself this day wouldn’t and shouldn’t cause much concern, but the downward slope has continued since Trump won the election, so the concern is that the trend may continue for years. Bye-bye tech for the next half a decade. And since I am in technology, it especially concerns me.

It would probably be wise to diversify holdings into market segments other than technology. I recently purchased some stock in a weapons manufacturer, and it hasn’t done badly so far.

How to check a chef recipe with serverspec

I like test-driven development. I used to think that test-driving Chef cookbooks is hard – until I failed an interview because I lacked knowledge of ChefSpec. So that same day I looked into ChefSpec and related tools (RuboCop, Foodcritic, and Serverspec). ChefSpec itself is easy. The good news is that it allows some sort of testing; the bad news is that it gives you stubbed unit tests, not integration tests. Serverspec to the rescue!

Serverspec allows functional, integration-level testing of cookbooks. But how do you set it up? I have some experience with Vagrant, but honestly I don’t use it extensively in personal development. I do, however, use VirtualBox a lot (along with Chef Server).

In my /etc/hosts I have written out a basic local network mapping local IPs to hostnames (the IP address column is omitted here): localhost, piousbox-samsung, sentact.local, zend.local, pi.local, webdevzine.local, sedux.local, cities.local, api.local, sleeper.local, nagios.local, wasya_co.local, wasya_co2.local, bjjc.local, bjjc-angular.local, anything.local, ubuntu15-virgin, centos-virgin, ubuntu14-virgin, ubuntu-virgin, lb_10.ubuntu, lb_10_spec, bjjc_22.ubuntu15, bjjc_22.ubuntu, bjjc_23.ubuntu14, bjjc_23.ubuntu, spec_24.ubuntu14, spec_24.ubuntu, jenkins1.local, jenkins.local, jenkins2.local, jenkins3.local, petclinic.local, nexus.local, centos-virgin.local, jenkins.centos.

The above local DNS is not necessary, but I have found it convenient to have.

Next, there is a repo that contains my chef server workstation setup and a number of other things. Although the ChefSpec for each recipe lives with that recipe, the serverspecs all go into the spec/ folder of my chef workstation. So let’s say I am testing that a recipe installs Ruby. Generate the serverspec scaffolding with `bundle exec serverspec-init`. In spec/spec_24.ubuntu/sample_spec.rb I have the following:

require 'spec_helper'

describe command("/usr/local/rbenv/shims/ruby --version") do
  its(:stdout) { should match /2\.0\.0/ }
end

and in the repo I have the instructions to run it:

# vm_spec_24
# verifies ish::install_ruby
knife client delete vm_spec_24 -y ; \
knife node delete vm_spec_24 -y ; \
# sshpass -p "the_password" ssh oink@spec_24.ubuntu "echo the_password | sudo -S rm -rfv /etc/chef" ; \
VBoxManage controlvm "ubuntu14 spec_24" poweroff ; \
VBoxManage snapshot "ubuntu14 spec_24" restore "network ok" && \
VBoxManage startvm "ubuntu14 spec_24" --type headless && \
while ! ping -c1 spec_24.ubuntu &>/dev/null; do :; done ; \
knife bootstrap spec_24.ubuntu -N vm_spec_24 --ssh-user oink --ssh-password the_password -r "recipe[ish::install_ruby]" --sudo --use-sudo-password -y --environment vm_samsung && \
SUDO_PASSWORD=the_password TARGET_HOST=spec_24.ubuntu be rspec spec/spec_24.ubuntu

What the script does is:

  1. Deletes chef node and client
  2. (does not) delete the target machine’s chef identity – that’s the quick and dirty way; the following step takes care of the same task more cleanly.
  3. reverts the VM to a good clean configuration. Only the static network is configured.
  4. waits for the machine to boot
  5. bootstraps the machine with chef
  6. runs the serverspec

Voila! This answers my need for local, semi-automatic testing with serverspec. Hope this helps! The disadvantage is that it doesn’t quite fit into CI/CD pipelines, but I will address that in a later post.