The Time Efficiency of an Individual in a Company

There is a thing about time efficiency. Even if I’m not efficient full-time, some of my hours in the day are efficient, and I can build and grow, even if slowly. It is a sign of professionalism that I can focus on a specific task for hours at a time.

Then, there is the saturation of time, and here is where critical mass comes into play. The truly successful people (who technologically impact the world) are pretty effective in their use of time… and it’s not just the per-hour effectiveness, it’s the ability to hold these concepts in memory and in focus for a long time (days, months, years), and furthermore hold in focus non-trivial combinations of these things. Not just do one thing well, but integrate the work of others, and build a system which by definition is a varied mix of things (otherwise, it’s called a component, not a system).

There are some successes that are only achievable with a critical mass of focus. The truly successful people pack so much impact into their time (be it 8 hours a day or more) that they physically cause success to happen. What makes that possible, I think, is a certain “continuous integration” of a person. Optimize the time at work and off work, optimize the little 20-minute pieces of time between tasks, pack the tasks tightly, and eliminate inefficiencies everywhere. There is no distinction between on- and off-work time; everything is treated as on-work time. And the success of the company depends on how much punch the team (the team leaders and the team’s individuals) packs into each hour of operation of the company. That is my current view on personal efficiency at work.

– * – * – * –

Unfortunately, here I cannot focus as much. I sleep in late (waking up at 10am), have to do a lot of walking to find food and places to study, and have to constantly fight sub-optimal weather, sun glare, and rain. Further, I have to move every 2 weeks, unless I actually find an apartment. And there is drinking! In Silicon Valley I don’t drink on weekdays, and that works out well. Here, I drink lightly, but it still costs me time and efficiency.

Further, my focus is divided into two equal parts: language immersion and engineering. And the engineering part is in English: when I do engineering, it moves me further from language immersion, and vice versa. So there is a certain conflict in my focus right now. It almost makes sense to do one thing for several days (study Spanish for three days, without technology), and then switch to technology and not study Spanish for a few days, thus avoiding context switching.

Of course, there is also the question of the daily exhaustion limit. If I do programming for 6 hours, I’m done with programming, but the day may or may not be done. In that situation, it makes sense to switch to another context, say the Spanish language, for 2-3 more hours. That may actually be more effective than doing the same task for several days in a row.

There is also the obvious issue of a daily routine. It’s good to perform some tasks daily, even if the results are not so great. If the same tasks are performed daily, day in and day out, you will become good at them, *and* keep the ability to focus on all of them each day. So in my local example, my daily task variety is as follows:

  • 30 minutes of conversational Spanish
  • some time studying vocabulary
  • some time studying the grammar book
  • some engineering
  • some content-creation, writing or photos or editing
  • daily cleaning: clean ears, teeth, keep everything clean.

20170509 Investment Diaries

Today, STRP (+6.64%) is going up, WINS (-7.5%) is going down, TSLA (+4.3%) is going up, TWTR (+0.66%) is up, LMT (+0.53%) is up, and AAPL (+0.82%) is up. We expect these trends to continue mid-long-term.

My friend is shorting SYMC (-0.42%). It’s worth watching, since he works at Symantec, though I do not put my own confidence into this yet.

The markets are generally optimistic and calm.

Review of Topics in Business (PMBA)

In preparation for a business-related internship, I’m reviewing topics in micro- and macro-economics, and after that will review topics in business. I’d like to point out that my favourite book on MBA and business so far is, hands-down, the Personal MBA.


Buy it, and read it twice. Take a look also at the list of books I recommend.

Similarly, the Harvard Business Review offers a lot of interesting articles on business, and is available here:

Water Harvester: This new solar-powered device can pull water straight from the desert air

So who is working on having this mass-produced? Obviously some organizations and maybe governments would be interested in seeing this generally available.


You can’t squeeze blood from a stone, but wringing water from the desert sky is now possible, thanks to a new spongelike device that uses sunlight to suck water vapor from air, even in low humidity. The device can produce nearly 3 liters of water per day for every kilogram of spongelike absorber it contains, and researchers say future versions will be even better. That means homes in the driest parts of the world could soon have a solar-powered appliance capable of delivering all the water they need, offering relief to billions of people.

The new water harvester is made of metal organic framework crystals pressed into a thin sheet of copper metal and placed between a solar absorber (above) and a condenser plate (below).

Wang Laboratory at MIT

There are an estimated 13 trillion liters of water floating in the atmosphere at any one time, equivalent to 10% of all of the freshwater in our planet’s lakes and rivers. Over the years, researchers have developed ways to grab a few trickles, such as using fine nets to wick water from fog banks, or power-hungry dehumidifiers to condense it out of the air. But both approaches require either very humid air or far too much electricity to be broadly useful.

To find an all-purpose solution, researchers led by Omar Yaghi, a chemist at the University of California, Berkeley, turned to a family of crystalline powders called metal organic frameworks, or MOFs. Yaghi developed the first MOFs—porous crystals that form continuous 3D networks—more than 20 years ago. The networks assemble in a Tinkertoy-like fashion from metal atoms that act as the hubs and sticklike organic compounds that link the hubs together. By choosing different metals and organics, chemists can dial in the properties of each MOF, controlling what gases bind to them, and how strongly they hold on.

Cold Lead Generation

How do you find startups that need project work? I posed this question to my brother in a phone conversation the other day. I said to him, tech crunch / crunch base comes to mind, and he said that the top earners on upwork and odesk and elancer are a good list of potential clients.

From my point of view, the interesting companies are those that are in high tech and just got the money to spend on R&D (research and development). Notably, tech crunch publishes lists of such companies.

From my brother’s point of view, remembering the days when he was a freelancer, his analysis was as follows. The majority of projects on elancer (or any freelance site) are <$100, or $100-$500. However, top earners pick the >$500 projects. And those projects are likely to run for months or years. And the clients who need such projects, are the interesting clients.

There is also cold calling, of course. Grab a yellow pages book and start looking at each company, one by one. I feel that I should try doing something like this, just to get experience with sales.

And with that, I saw a whole lot of companies to reach out to. So I think it’s not a tremendous problem finding the companies that need the services that we offer. It’s more important to maintain a good presence (online and in-person), and most importantly, offer relevant & competitive services.

Here are some lists of companies that may need tech, just for reference:

Lunch and learn – Topics in Scaling Web Applications – From tens to hundreds concurrency

Topics in Scaling Web Applications

From tens to hundreds concurrency

About the Author

Victor Piousbox acts in the capacity of a senior-level full-stack software engineer. He leverages his 8 years of overall development experience to recommend and implement non-trivial technical solutions. He likes to find and address performance bottlenecks in applications. He works hard on being able to recommend the best tools and the right approach to a challenge.


In this episode we’ll talk about the particular challenges we faced in February 2017 at Operaevent, when we were addressing the resilience, performance, and scalability of our infrastructure.


We have a hybrid stack that makes heavy use of Ruby and JavaScript. It is an N-tier architecture, with RESTful and socket communication between the back- and frontends. Storage is MongoDB for persistent data, Redis for in-memory data, and S3 for files; caching is on-disk.

The frontends are: the chatbot interface (implementing IRC), a jQuery-heavy web UI, a React.js Chrome extension, and the jQuery-heavy OBS layer.

The middle tier is: the Ruby API, a Node.js socket emitter, a number of Ruby services, and a number of background and periodic workers.


After reaching a certain usage threshold, our services, particularly the chatbot interface, started crashing a lot. The worst of it was that it would consistently go offline at night, outside business hours, when nobody was in the office to fix or at least restart it. This happened every night for several weeks, at the most inconvenient of times: at 4am or around midnight. It was critical for us to start guaranteeing much better uptime in order for our service to be usable.


We spent a non-trivial amount of effort troubleshooting the issues. We would find a performance bottleneck and address it, which put us in a position to find the next one. With this iterative approach we implemented tens of changes, the end result of which was a resilient application that stopped falling over entirely. We don’t have exact metrics on stability, but it was well within the requirements to consider our services stable.

The first step was to take a look at the logs: Apache logs, application logs (each service has a log), and error and access logs.

Additionally, we implemented services that collected metrics that we were interested in. So we collected custom logs on the performance of our boxes.

We installed a number of monitoring agents; among them, a Mongo monitoring agent was introduced.

There was a particular error message in the logs that preceded downtime. We built a simple stress test that could reproduce the exact error on a small scale. The error was “unable to get a db connection within {timeout} seconds.” Once the error was consistently reproducible, it was much easier to find the exact numbers and configuration parameters that were causing it. To address this bottleneck, we increased the number of db connections in the application’s pool and adjusted the timeout interval to a sensible value.
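As a sketch of the kind of change involved (a mongoid.yml-style fragment; option names vary by driver version, and the numbers and database name here are illustrative, not our production values):

```yaml
# mongoid.yml (illustrative values, hypothetical database name)
production:
  clients:
    default:
      uri: mongodb://localhost:27017/app_production
      options:
        max_pool_size: 50        # raise the connection pool above peak concurrency
        wait_queue_timeout: 10   # seconds to wait for a free connection before erroring
```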

Next was an error having to do with file descriptors: the kernel would complain that too many file descriptors were open, and we would experience downtime then.

The basic change to fix this was increasing the number of file descriptors that a process can hold open at any time. This can be set per-user, as well as system-wide.

It took us a while to discover that upstart, the service manager of Ubuntu 14, does not honor ulimit settings. This is because ulimit settings are per-session, and services aren’t run in sessions; upstart has its own mechanism for defining those limits. Furthermore, the number of file descriptors can be set at the system level, which is what we did in the end.
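For reference, a sketch of the settings involved (file names are the standard Ubuntu 14 locations; the numbers and the `deploy` user are illustrative, not our production values):

```
# /etc/init/myservice.conf -- upstart's own limit stanza (session ulimit is ignored here)
limit nofile 65536 65536

# /etc/security/limits.conf -- per-user limits
deploy  soft  nofile  65536
deploy  hard  nofile  65536

# /etc/sysctl.conf -- the system-wide ceiling, which is what we set in the end
fs.file-max = 200000
```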

In addition to increasing the limits on open file descriptors, we separated the services into individual users. At this time, each service is run by its own user, as opposed to one user running all the services. This allows us better scaling and better separation of services.

The next step was a manual code review. We looked at what the code was doing, to see if any areas of it looked problematic. There were several safeguards and checks that were computationally expensive. We refactored them so that each check is either fast, or doesn’t happen as often, or happens at a later time, or happens in the background.

We looked at the database queries to see which took the most time. Unsurprisingly, there were some optimizations to be made there. We denormalized some data to reduce the number of queries executed for each chat message. Overall, we probably halved the number of queries per chat message.
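A minimal Ruby sketch of the denormalization idea (the models and field names are hypothetical, not our actual schema): copy the sender’s display name onto the message at write time, so rendering does not pay a per-message lookup.

```ruby
# Denormalization sketch with hypothetical models. A counter stands in
# for real database round-trips so the difference is visible.
User = Struct.new(:id, :username)

class MessageStore
  attr_reader :query_count

  def initialize(users)
    @users = users.to_h { |u| [u.id, u] }
    @messages = []
    @query_count = 0
  end

  # Normalized write: store only the user id; every render pays a lookup.
  def add_normalized(user_id, text)
    @messages << { user_id: user_id, text: text }
  end

  # Denormalized write: copy the username once, at write time.
  def add_denormalized(user_id, text)
    @messages << { user_id: user_id, username: lookup(user_id).username, text: text }
  end

  def render
    @messages.map do |m|
      name = m[:username] || lookup(m[:user_id]).username # extra query if normalized
      "#{name}: #{m[:text]}"
    end
  end

  private

  def lookup(id)
    @query_count += 1 # stands in for a db round-trip
    @users.fetch(id)
  end
end
```

The normalized path pays one lookup per message on every render; the denormalized path pays once at write time, at the cost of keeping the copy in sync if usernames change.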

We implemented a watchdog on the service: if the service does not respond within a set time (60 seconds), we get a notification. We could make it so that the watchdog automatically restarts the service, but instead we opted to receive notification only, and restart manually as necessary.

We refactored the application to cleanly separate message sending and receiving from message processing. With the conversion to background workers for every chatbot command, we are better positioned to scale. We can increase the hardware resources we allocate to message processing, and not have duplicate messages. We can also failover message sending/receiving without affecting message processing. This gives us the ability to scale each individual component as needed. Apart from configuration parameter tweaking, this was the single most important change that was introduced.

For sending and receiving messages, we converted from using a database to using an in-memory queue: from MongoDB we went to Redis. Additionally, we went from polling to a callback architecture on that piece. Now, instead of polling the database every second or two, we register a callback with Redis that gets triggered on queue push. While I believe this did not directly affect resilience and stability, it did cause a noticeable performance improvement.
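The shape of that change can be illustrated with Ruby’s standard `Thread::Queue` standing in for the Redis queue (in production the blocking wait was on the Redis side): a blocking pop wakes the consumer the moment a message is pushed, instead of rediscovering it on the next poll tick.

```ruby
# Callback-style consumption sketch. Thread::Queue stands in for Redis;
# no poll interval, no wasted wakeups.
queue    = Queue.new
received = Queue.new

consumer = Thread.new do
  msg = queue.pop      # blocks until a message arrives, wakes immediately on push
  received << msg
end

queue << "hello"       # producer pushes a chat message
consumer.join(5)
```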

Findings and Changes

We implemented about a dozen changes, with the cumulative result that our infrastructure became stable.

Tools we used
  • log analysis, better log collection
  • more monitoring, custom monitoring and log collection
  • custom stress tests
  • code review and optimizations, db queries review and optimizations
  • more caching
  • moving storage in memory (redis)
  • converting timed polling to event callbacks
  • introducing a watchdog, better use of background workers
  • denormalization of data in the db
  • security settings tweaking, application configuration tweaking.

Planning Ahead

We can still separate the services further. At this time, a single virtual box can be running several services: it can be an API app at the same time as it is a websocket app. However, we anticipate that all the services will be separated out onto individual boxes. Furthermore, we can cluster each service, and have several machines powering a single service. The architectural decisions we have made so far in this stack would accommodate that.

We can add utility boxes which do heavier data processing operations. One of the computationally-expensive things we do is report generation. It happens on production boxes right now. We can offload that work to utility boxes, and this way production boxes will not see a usage spike.

Elements of Corporate Culture

We are a small company with very little bureaucracy or politics going on. That said, we have some definite elements of corporate culture – we define our own corporate culture – that we like to follow to improve everyone’s productivity. Here is a simple list:

  • Daily scrum meeting at 9:15am
  • Slack is our preferred method of communication, after face-to-face communication (our office workers are much more effective than the remote ones).
  • If you are late, announce it on the general chat.
  • We have the rotating Wizard status: it is a desk statuette that gets awarded temporarily to a member of the team for exceptional achievements.
  • We generally go out to eat as a team once every week.
  • We reserve environments during standup at the beginning of the day. We have the following environments: staging, production, dev1, dev2, dev3. We put on the dry erase board(s) who is working on which environment that day.

Branching strategies and github usage in our code

At Operaevent we have two branching strategies. One is based on master, where master is the main stable branch and feature branches are created and merged into master by all the developers. This is the case in the `bounties-frontend` repo, where the main branch is master.

The other strategy is a variation of semantic versioning, where we have versions in the format x.y.z (major.minor.patch), and the 0.x.0 branches are stable branches where the highest x is the latest code running in production. In particular, the `node` codebase is on branch 0.5.0 right now, and `gather-chrome` is on 0.1.0.

  • For now, our semantic versioning offers several advantages: (1) sorting branches alphabetically makes very good sense and we can handle dozens of branches without confusion, and (2) you always know what’s running in production and can fall back to an earlier version easily. Additionally, this methodology alleviates the need for creating release tags.
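As an illustration of point (1), version-named branches can be sorted with Ruby’s built-in Gem::Version (the branch names here are examples):

```ruby
# Sort version-named branches numerically. Gem::Version compares
# component-wise, so "0.10.0" sorts after "0.2.0", which a plain
# string sort would get wrong.
branches = ["0.5.0", "0.1.0", "0.10.0", "0.2.0"]
sorted = branches.sort_by { |b| Gem::Version.new(b) }
```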

We have daily deliverables! This means that at the end of your day, you should commit your code and everything that you have written in the day, and preferably issue a pull request.

If you have not worked on a codebase for a while, branch off of the most recent branch (0.x.0) and pull request into it at the end of the day.

The workflow for our code repos, particularly gather-chrome, is as follows:

  • branch off of the most recent stable branch 0.x.0
  • pull request into it at the end of the day.
  • specs are optional right now, but will eventually be required. Specs now earn you bonus points.

While you are working on an issue, mark it as assigned to yourself in github issues. I encourage you to work on one issue at a time. Finish an issue, pull request it, and assign the next issue to yourself.

Tech’s gilded glory didn’t mean much to Trump’s supporters

Read full article here:

This, unfortunately, is happening now. NASDAQ is way down today, and the biggest bearish market movers are all in tech: Facebook and Alphabet and Apple. By itself this day wouldn’t and shouldn’t cause much concern, but the downward slope has continued since Trump won the election, so the concerning part is that the trend may continue for years. Bye-bye tech for the next half a decade. And since I am in technology, it especially concerns me.

It would probably be wise to diversify holdings into market segments other than technology. I recently purchased some stock in a weapons manufacturer, and it hasn’t been bad so far.