Multithreaded programming interview questions
In my opinion, this is a fabulous interview question - at least assuming (1) the candidate is expected to have deep knowledge of threading, and (2) the interviewer also has deep knowledge and is using the question to probe the candidate. It's always possible that the interviewer was looking for a specific, narrow answer, but a competent interviewer should be looking for the following:
- Ability to differentiate abstract concepts from concrete implementation. I throw this one in primarily as a meta-comment on some of the comments. No, it doesn't make sense to process a single list of words this way. However, the abstract concept of a pipeline of operations, which may span multiple machines of differing capabilities, is important.
- In my experience (nearly 30 years of distributed, multi-process, and multi-threaded applications), distributing the work is not the hard part. Gathering the results and coordinating independent processes are where most threading bugs occur (again, in my experience). By distilling the problem down to a simple chain, the interviewer can see how well the candidate thinks about coordination. Plus, the interviewer has the opportunity to ask all sorts of follow-on questions, such as "OK, what if each thread has to send its word to another thread for reconstruction?"
- Does the candidate think about how the processor's memory model might affect the implementation? If the results of one operation never get flushed from L1 cache, that's a bug even if there's no apparent concurrency (see the visibility sketch after this list).
- Does the candidate separate threading from application logic?
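To make the memory-visibility point concrete, here is a minimal Java sketch (the class name and timing are my own illustration): without `volatile`, and hence without a happens-before edge, the worker thread may legitimately never observe the write to `done` and spin forever.

```java
public class VisibilityExample {
    // Without 'volatile', the JVM and CPU are free to keep a cached view of
    // 'done' per thread, so the loop below could spin forever even after
    // main() sets it to true.
    private static volatile boolean done = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!done) {
                // busy-wait; correctness depends on seeing the other thread's write
            }
            System.out.println("observed done = true");
        });
        worker.start();

        Thread.sleep(100);
        done = true;          // volatile write publishes the change to the worker
        worker.join();
    }
}
```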
That last point, keeping the threading separate from the application logic, is, in my opinion, the most important. Again, based on my experience, threaded code becomes exponentially more difficult to debug when the threading is mixed with the application logic (just look at all the Swing questions over on SO for examples). I believe that the best multi-threaded code is written as self-contained single-threaded code, with clearly defined handoffs.
With this in mind, my approach would be to give each thread two queues: one for input, one for output. The thread blocks while reading the input queue, takes the first word off the string, and passes the remainder of the string to its output queue. Some of the features of this approach (a concrete sketch follows the list):
- The application code is responsible for reading a queue, doing something to the data, and writing the queue. It doesn't care whether it is multi-threaded or not, or whether the queue is an in-memory queue on one machine or a TCP-based queue between machines that live on opposite sides of the world.
- Because the application code is written as-if single-threaded, it's testable in a deterministic manner without the need for a lot of scaffolding.
- During its phase of execution, the application code owns the string being processed. It doesn't have to care about synchronization with concurrently-executing threads.
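Here is a minimal sketch of that design in Java. The class name `FirstWordStripper` and the wiring in `main` are my own illustration, and I'm assuming `java.util.concurrent.BlockingQueue` as the handoff; the same worker code would sit just as happily in front of a TCP-backed queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical pipeline stage: blocks on its input queue, removes the first
// word, and hands the remainder of the string to its output queue.
public class FirstWordStripper implements Runnable {
    private final BlockingQueue<String> input;
    private final BlockingQueue<String> output;

    public FirstWordStripper(BlockingQueue<String> input, BlockingQueue<String> output) {
        this.input = input;
        this.output = output;
    }

    @Override
    public void run() {
        try {
            while (true) {
                String s = input.take();                 // blocks until work arrives
                String[] parts = s.split("\\s+", 2);     // first word vs. the rest
                System.out.println(Thread.currentThread().getName() + " took: " + parts[0]);
                if (parts.length > 1 && !parts[1].isEmpty()) {
                    output.put(parts[1]);                // pass the remainder downstream
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();          // allow orderly shutdown
        }
    }

    // Demo wiring: two stages chained by queues; main drains whatever is left.
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q1 = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> q2 = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> q3 = new ArrayBlockingQueue<>(10);

        Thread s1 = new Thread(new FirstWordStripper(q1, q2), "stage-1");
        Thread s2 = new Thread(new FirstWordStripper(q2, q3), "stage-2");
        s1.setDaemon(true);
        s2.setDaemon(true);
        s1.start();
        s2.start();

        q1.put("the quick brown fox");
        System.out.println("left over: " + q3.take());   // prints "brown fox"
    }
}
```

Note that the worker never learns how many stages exist or what kind of queue it's talking to; the shape of the pipeline lives entirely in the wiring code.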
That said, there are still a lot of grey areas that a competent interviewer can probe:
- "OK, but we're looking to see your knowledge of concurrency primitives; can you implement a blocking queue?" Your first answer, of course, should be that you'd use a pre-built blocking queue from your platform of choice. However, if you do understand threads, you can create a queue implementation in under a dozen lines of code, using whatever synchronization primitives your platform supports.
- "What if one step in the process takes a very long time?" You should think about whether you want a bounded or unbounded output queue, how you might handle errors, and effects on overall throughput if you have a delay.
- How to efficiently enqueue the source string. Not necessarily a problem if you're dealing with in-memory queues, but could be an issue if you're moving between machines. You might also explore read-only wrappers on top of an underlying immutable byte array.
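On the first follow-up, here is what a bare-bones blocking queue might look like in Java, using only intrinsic locks (`synchronized`/`wait`/`notifyAll`). This is a sketch, not a replacement for `java.util.concurrent`; I've given it a capacity bound because that also speaks to the "slow stage" question, since bounding the queue is how you apply back-pressure instead of letting work pile up without limit.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A bare-bones bounded blocking queue built on intrinsic locks.
// Production code should use java.util.concurrent.BlockingQueue instead.
public class SimpleBlockingQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public SimpleBlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                    // block producers while the queue is full
        }
        items.addLast(item);
        notifyAll();                   // wake any blocked consumers
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                    // block consumers while the queue is empty
        }
        T item = items.removeFirst();
        notifyAll();                   // wake any blocked producers
        return item;
    }
}
```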