I never seem to find a quick good answer for this.
Maybe I just almost never work on REAL hard things.
So my question to you, HNers, is:
What is the hardest technical problem YOU have run into?
I am really interested to know what you would consider "hardest". It's probably not going to be something like "I changed the css property value from 'display: block' to 'display: inline-block'."
Last year, I came up with a solution to a problem that we'd been solving sub-optimally for years. My solution is arguably optimal (given a certain set of assumptions) and requires multiple orders of magnitude less code than the previous solution. The solution is the core part of a paper that was recently accepted to a top conference in its field. That sounds like it might be good evidence the problem was a hard problem, but in fact the solution just involved writing down a formula that anyone who was exposed to probability in high school could have written down, if it had occurred to them that the problem could be phrased as a probability problem (that is, the solution involved multiplying a few probabilities and then putting that in a loop). When I described the idea to a co-worker (that you could calculate the exact probability of something, given some mildly unrealistic but not completely bogus assumptions), he immediately worked out the exact same solution. It wasn't an objectively hard problem because it's a problem many people could solve if you posed the problem to them. The hardest part was probably having the time to look at it instead of looking at some other part of the system, which isn't a hard "technical problem".
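The parent doesn't reveal the actual formula, so as a generic illustration of the "multiply a few probabilities and put that in a loop" shape: the textbook example is computing the chance that at least one of several independent events occurs.

```python
# Illustrative only: the parent comment doesn't say what its formula was.
# A classic "multiply probabilities in a loop" computation: the chance
# that at least one of n independent events occurs.
def p_at_least_one(probs):
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)  # probability that none have occurred so far
    return 1.0 - p_none

print(p_at_least_one([0.1, 0.2, 0.5]))  # 1 - 0.9*0.8*0.5 = 0.64
```

The point stands: anyone who took high-school probability could write this, once the problem is phrased that way.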
Another kind of problem I've solved is doing months of boring work in order to produce something that's (IMO) pretty useful. No individual problem is hard. It's arguably hard to do tedious work day in and day out for months at a time, but I don't think people would call that "technically hard".
I'm with you. Maybe I never work on "REAL hard things"? How would I tell?
We build tools that read and write Excel files (open-source library: https://github.com/sheetjs/js-xlsx). There are plenty of very difficult problems involving ill-specified aspects of the various file formats and errors in the specifications, but it is largely a matter of grinding and finding files in the wild that capture the behavior you want to understand. Those are "difficult" in the sense that people still get these things wrong (related: a bug in Oracle SmartView recently corrupted US Census XLS exports, which boiled down to an issue in calculating string lengths with special characters), but they don't feel difficult, since most of the work didn't involve any really clever insights.
IMHO the hardest problem is now fairly straightforward: how do you enable people to test against confidential files? The solution involves running the entire process in the web browser using the FileReader API: https://developer.mozilla.org/en-US/docs/Web/API/FileReader . That is an obvious technical solution in 2017, but few thought it was even possible when we started.
Not being able to answer this question is just as telling as an answer itself.
So do what politicians do -- answer the question you wish they had asked instead of what they actually asked.
In my case, at most places I've worked I end up being one of the go-to people for gnarly bugs that have stumped the regular crew. So part of my interview prep is to condense a war story into something short and coherent that illustrates why people should have faith in my intuition, a bit of a tough sell. Then during the interview I latch onto any semi-related question and tell my rehearsed story.
-- like the story of when you saved 187 million dollars by fixing a totally trivial bug https://thehftguy.com/2017/04/04/the-187-million-dollars-gma...
"Well I'm not sure I can pick just one as the "hardest", but one very interesting problem that ended with an elegant solution was ..."
And you fill in the ... with a tale of you slaying a dragon^W prod issue with just your wits and a default .vimrc.
I think much of the reason for that is that most software projects that deliver business value involve plugging together a bunch of components to deliver functionality that is not particularly complex. It doesn't involve pushing the limits of your datasources or inventing new algorithms. If performance problems come up, it's almost always cheaper to throw money at AWS or more hardware than to spend a couple developer-months addressing the bottleneck in the application. In some ways, I guess that's efficient from the perspective of the market, but it's disappointing for engineers who like to build applications that require solving hard problems.
Personally, many times I answer with the time I decided to re-engineer and rewrite a "snapping engine", which snapped boxes together when they were close to each other in a 2D design application. It was unexpectedly difficult to write with some of the features we wanted, but after a couple of iterations I finished, and since then new features and plugins have worked nicely together and been easy to add and implement.
Sorry, I don't.
The story I would tell if asked this was solved by the guy I was pairing with. He knew about URL encoding images, which immediately solved something we could have worked for weeks on. I was very surprised and impressed. Of course, now that is part of my toolbox, and I wouldn't think much of solving something else this way.
Sometimes I solve problems easily that others find very hard. I'm glad I could help, but I don't go around feeling proud of how awesome I did that day. I just happened to know something others didn't yet.
This might sound like humblebragging, and perhaps it is. Just trying to explain why I have a hard time with this question.
If it's something too simple, you're going to be looked down on. If it's a clever hack around someone's bug, it's hard to really be proud of something that shouldn't have had to exist in the first place. If I say something from a long time ago, I may not remember enough details to answer follow-up questions. If my job is boring (hence interviewing for a new one), I may not have had any good "shining moments" recently.
As time goes on, stuff that used to seem or look cool can become embarrassing. I've seriously considered deleting some of the early stuff I have on Github even though it has relatively-a-lot-of-stars for something small and stupid.
Asking to be regaled by stories of tech heroism is also prone to sabotage, because it's easy to rehearse an impressive story. It doesn't necessarily indicate their ability to do things that are useful for the job; it just means they rehearsed a good story and prepared for some follow-ups specifically related to that.
In an interview, you're a lot better off asking questions that will require the respondent to formulate an answer right then and there v. something that they've rehearsed. You're also better off leaving the expectations of tech heroism behind.
"Rock star" job listings have more or less died out, but this is really just a lesser form of it. Typically you don't need or want a rock star. You want someone whose output is professional and consistent.
>I never seem to find a quick good answer for this.
Real easy: Overcoming technical debt/bad decisions of the previous group of programmers.
At my current company/position, our group basically replaced an outside company - two programmers. You name something you shouldn't do, and they did it: logic in the code-behind, logic in triggers, plain-text passwords, direct database access - Bobby Tables all the way down, etc.
When they were in charge, the company had ~4 customers... we are now rocking ~30 unique customers. Their fragmented codebase is unmanageable.
Keeping a train moving while replacing the engine and changing the wheels would be easier.
This doesn't include company culture, inter-company politics, other decisions, etc.
Every developer dreams of going greenfield. Ultimately, that's because it's harder and much more tedious to read code than to write it. If you start from scratch, you understand the whole stack/platform, everything is customized to your liking, and so on. That's great for you, but the company is usually stuck spinning its wheels for months while you push this rewrite down their throats.
It's also very easy to underestimate the depth of domain knowledge and accounted-for corner cases encoded in an old codebase. It looks easy at first, but it usually ends up taking at least months to reach feature parity with the old software, which usually also means that people will use both systems simultaneously, requiring data synchronization, etc.
The whole thing becomes messy, and by the time you're done, the "new system" usually isn't really all that improved over the old system. Systems get convoluted in the process of development, business needs demand quick shoehorning of something instead of thorough refactoring, etc.
Once in a while, a full rewrite is indeed justified, but it's much rarer than most people think.
Going in saying "Yes, my company needed a full rewrite" is an instant orange flag in my book, and thorough questioning would be needed to determine if this is an ongoing attitude problem where there's a reluctance/reticence to read other peoples' code. That portends laziness, a disrespect for colleagues, and a disrespect for the business's needs, which are rarely aligned with tying its developer labor up in a greenfield reimplementation.
This is often a gold mine, just make sure your interview doesn't become a discussion about how bad other programmers are.
Scope is the project's size and level of ambiguity. Ideally, as you gain experience, your scope grows. When you're coming out of school, your answer to this question might be a tricky bug fix, but after a few years it might be something like "we needed to build a system to flag and filter fraudulent users based on their site activity."
Depth is about how much detail you can talk about the project in. If you choose a project with a big scope, can you drill down and talk about the implementation details of each component? If you chose a bug fix, can you describe exactly what triggered the bug as opposed to just knowing what fixed it?
For originality, what about the problem made it non-trivial to solve with out-of-the-box tools? For the fraud case above, maybe the data was stored in a format that was hard to analyze. Or maybe, for people at the bigger companies, there were scaling issues that required unique solutions. For bug fixing, maybe it was a bug that was really hard to reproduce and you had to do a lot of memory dumps and code analysis to pinpoint it.
When I finish something I like to think about it along those three axes for a little bit in case I need to recall details later.
The hardest technical problems I've run into have been mostly human; i.e., other people.
But, in the purest sense, I have to say that I have observed, on reflection, that the reason I am a technologically competent, adept person, making a living by way of dark and serious mystery, is that I long ago decided that nothing would be hard. Just .. un-learned.
You see, it is a key factor of success that you, literally and otherwise, embrace the idea that you can't know everything.
So, know what you need. The hard things become easier the moment you do it, even the first time.
I know this sounds like compound nonsense, but I honestly had to give pause on this question. I'm a systems engineer with decades of experience in a multi-variate set of industrial categories, and relatively successful in my lot. This question made me really think - I couldn't think of the hardest things.
The hardest things, I haven't done yet. {But, on another thread, I'm serious about people being the hardest things about technology..}
When I was a young, wet-behind-the-ears Java developer, I answered by telling them about making a modification to a Linux kernel driver for hardware support. It was a telephone interview, but the silence was deafening. Still the only interview I ever had where I wasn't offered the job.
Some things haven't changed in that it is when I step outside my comfort zone I find the technical problems harder. But now I'd just talk about a more comfortable problem that went through multiple rounds of better fit solutions on a system actually in Java so they can relate and see I can actually talk about the target language. Then I'd probably make the point that as a more senior developer it's usually the non-technical problems that require my most focus.
Still makes me cringe thinking about it.
Probably because you are in a much better place now.
I have found that the propensity to lie is directly proportional to one's [for lack of a better word] desperation. The less desperate I am, the more ideology I tend to exhibit.
As a recent example, in my game engine I copy/pasted some code for framebuffer and texture creation and missed renaming one variable. A stupid mistake that took me 2 days to find. But to solve it, I needed to look at all of the various textures on-screen. Some of them are non-linear, and some are single-component (just red), which doesn't display well, so I ended up writing a method that allowed me to render all of the various stages of my renderer out to the screen (color, shadow, light, depth, normals, etc.) as a debug aid. Only then did I realize that the shadow buffer texture was sized to width * width instead of width * height. Again, a stupid mistake, but now we've got something to dig into and talk about, and it's much more about the solution than the problem.
We were going to buy the calculations in as an API because it was an opaque government standard. The API turned out to be incomplete after we bought it; when we rang them up to ask why, the answer was "oh, we are getting out of that side of the business".
I had two weeks (over Christmas) to build out an API implementing a government calculation that was specified in one 200-page PDF[1] and then modified in another two; the total calculation had 44 individual steps referring to several dozen data tables, some with hundreds of values.
I did it with a day to spare.
It was probably the single greatest pure technical programming I've done in my career.
[1] https://www.bre.co.uk/filelibrary/SAP/2012/SAP-2012_9-92.pdf
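A spec like that tends to reduce to a table-driven pipeline of small steps. As a hedged sketch of that structure (the table values, step names, and numbers below are invented, not taken from the real SAP 2012 document):

```python
# Hypothetical sketch of structuring a long, table-driven government
# calculation: each numbered step is a small function, the lookup tables
# are plain data, and a driver chains the steps over a shared context.
TABLE_U1 = {"masonry": 0.5, "timber": 0.25}  # illustrative values only

def step_wall_loss(area_m2, construction):
    """One 'step' of the calculation: look up a coefficient and apply it."""
    return area_m2 * TABLE_U1[construction]

def step_total(*losses):
    """A later step that aggregates earlier results."""
    return sum(losses)

STEPS = [  # the real spec chains 44 of these
    lambda ctx: ctx.update(wall=step_wall_loss(ctx["wall_area"], ctx["wall_type"])),
    lambda ctx: ctx.update(total=step_total(ctx["wall"])),
]

ctx = {"wall_area": 100.0, "wall_type": "masonry"}
for step in STEPS:
    step(ctx)
print(ctx["total"])  # 50.0
```

Keeping the tables as data rather than code also makes it practical to absorb the follow-up amendment documents.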
The answer I used to use was a problem I had working as an R&D intern: determine when the speed limits posted on a street have changed from measurements of driver behavior. Interesting and fairly tricky ML problem (weather is a big confounder). Ended up writing a lot of C to get high enough performance to make the solution reasonable which was educational (I didn't know a lot of C at the time), but almost certainly not the right approach to the performance problem. Still more science than development, so it depends on who's asking.
Probably the hardest business-type technical problem I've encountered is database restructuring. We moved (a subset of our data) from a NoSQL database to SQL as part of larger architectural changes, and mapping, migrating, and maintaining compatibility has been non-trivial.
The hardest problem I've encountered has been helping to rescue a project with a severely dysfunctional development history. Much more project management and people than technical (it was just a CRUD app) but I came into a project that had been in development for a year or so and stalled out. The development was outsourced and I fell into a position as a liaison between the internal folks at the university that wanted the product and the dev team that had been hired to build it. Sort of a classic issue where the dev team and the stakeholders would talk right past one another. It drove me crazy at the time, but an excellent experience in retrospect. And it has a happy ending; the project went on to be successful after that, at least when I last heard.
Did you see that post a few days ago about "Is ECC RAM worth it?"
The answer, after my hellish debugging, is an unequivocal YES! My horrible problem would have either manifested itself as a correctable ECC error or I would have gotten an uncorrectable ECC exception. I would have been able to go straight to hardware engineering with that instead of spending many miserable nights debugging an RTOS and ISRs.
* GPU drivers are a buffet of terrible things. My best moment was either hand-compiling shaders to GPU-specific assembly in order to implement video playback filters, or deducing how the GPU vendor's drivers managed to fake a particular GL extension and implementing that same fake trick in the MesaGL version of the driver.
* Self-applicable partial evaluators are cool. I've tried several times to build one, and each time I fall short.
* I've hand-written parsers for big languages. I've also written parser generators. I'm not sure which is harder.
* Fighting with motherfucking BitBake. You have no fucking idea.
On multiple occasions, I've kicked off BitBake to run overnight. I come in to find it failing from running out of disk space. And I'm usually perplexed - does this really need over 200 GB of space!?!
Debugging memory leaks in a Python 2.7 asynchronous (gevent) daemon.
Aside from memory leaks supposedly being improbable in Python's reference-counted runtime, the GC interface and stdlib tools for this kind of debugging are anemic in Python 2 (improvements have been made in 3, though I can't comment on them since I haven't used them yet). Not to mention that C extensions (gevent is just one) add complexity to debugging.
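One technique that does work even on Python 2.7 under gevent is diffing type histograms of GC-tracked objects between suspected leak points. A minimal sketch (the `Session` class here just simulates a leaking object):

```python
# Leak hunting with only the stdlib: snapshot a histogram of all
# GC-tracked object types at two points and diff the counts. Types
# whose counts grow without bound are leak suspects.
import gc
from collections import Counter

def type_histogram():
    gc.collect()  # settle reference cycles first
    return Counter(type(o).__name__ for o in gc.get_objects())

class Session(object):
    pass

before = type_histogram()
leaked = [Session() for _ in range(100)]  # simulate a leak
after = type_histogram()

growth = after - before  # Counter subtraction keeps only positive deltas
print(growth["Session"])  # → 100
```

From there, `gc.get_referrers()` on a suspect instance helps answer the harder question of *who* is keeping it alive, though with C extensions in the mix the picture can stay murky.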
1) One problem is harder than the other if it requires more knowledge. E.g. to code AI you need to have programming skills, AI related skills, statistics skills and graph theory skills, plus whatever your domain knowledge is (e.g. how to build the code in your company's environment).
2) One problem is harder than the other if it requires more skills.
3) [...] harder if it requires a higher composition level of skills. E.g. configuring a firewall via iptables is harder than configuring a firewall via your router's web gui, since the first requires bash, Linux, tcp/ip related skills as a foundation to even understand what iptables does. The gui may only require a limited set of networking skills and 2 pages of router handbook.
4) [...] harder if it is more complex. Coding your own kernel is harder than coding your own calculator.
5) [...] harder if it requires more departments. "Go to market" for your product is therefore a harder task than "proof of concept".
6) [...] harder if it relies on more legacy code. Legacy code always contains domain knowledge that is unknown to most people, even the developers. Changing that code or its environment yields a lot of surprises.
How I'd debug these (it took me a while to be effective in this regard):
- Main tool was the AIX kernel debugger (like cutting bone with a butter knife :)
- Identify corrupted memory, look for clues like recognizable data structures or pointers in the raw dump that could be cross-checked against symbol maps, etc.
- Confirm the alignment of the corrupted memory. Page alignment was a tell-tale sign of errant DMA writes in our system... cache alignment is more mysterious and can be related to CPU design bugs (IBM designs their own POWER processors, and we'd test on alpha hardware frequently).
- Scour the voluminous kernel trace for the physical frame # of the corrupted memory. A typical offending sequence was:
1. Frame assigned to adapter for DMA
2. Physical memory layout change (we supported live hot-swappable memory arbitrated by the POWER hypervisor)
3. Frame allocated for use by page fault handler
4. Crash happens
Sometimes the root cause was that the device drivers were not properly serialized with the dynamic memory resource subsystem (the hot-swappable memory) and the sequence above happened very quickly (<1 ms). Sometimes the bug took a while to manifest, and the nice story told above for our page was interspersed with thousands of unrelated activities in the same region of memory. We had to be like a prosecutor and build a strong case to implicate a bug somewhere else. Until then, our team was always on the hook to figure these out.
This class of problem was hard because the tools we have at our disposal to collect evidence were quite inadequate, and the amount of data to sift through was enormous. Also, any tool we think might help to sift through all this data needed to already be in the system and in the kernel debugger as a diagnostic command (a crashed system in the debugger cannot be modified in practice). There's hundreds of those debugger commands for all kinds of randomly recurring problems we had trouble figuring out. Over time, you'd build your own for your own set of problems in your kernel specialty :-)
This one took many many tries of various incantations and variations to discover (documentation was "less than useful") http://blog.outerthoughts.com/2011/01/bulk-processing-lotus-...
This one makes for a nice story when I talk about computer-specific language issues: http://blog.outerthoughts.com/2010/08/arabic-numerals-non-wy...
I'd love for someone to tell me a story about something they couldn't solve (or at least not the way they wanted to).
If they can't come up with something, which is rare, I ask them to tell me about something that was fun for them.
I was in twelfth grade. I was given some EEPROMs which I had to write data to, except that I did not have the standard equipment to write to them. I used a printer port to drive an amplifier circuit I built, which in turn sent the voltages to the EEPROM. I sent waveforms exactly the way the data-sheet suggested. Yet I wasn't able to read back what I was writing.
I had no oscilloscope or waveform analyzer to debug. All I could do was to re-read the data-sheet and then my program for correctness.
I could never figure out why it wasn't working.
Later, my Dad found someone who did have the company-supplied EEPROM-writing equipment and took the EEPROM to them. He learned that there was data on only the first few locations.
This is one of the very few projects where I have failed. Being in twelfth grade then, doing stuff that would stump college grads, I haven't taken offense with myself. :-)
And you realize you've done about the same, fully finished and shipped, in about 3 weeks.
The rest of the interview is wondering whether you should cry or he should.
The easiest solution was to use transforms to force rendering through the GPU render pipeline by adding a Z-depth to the elements.
Which caused font-weight rendering issues in Firefox. We never resolved the issue, even after a root-cause analysis showed the bug was in WebKit and not Blink or Gecko.
On the backend, it was finding a way to store a persistent collaborative changelog with proper access control and hierarchy on top of an RDBMS. We resorted to redesigning a distributed file system based on HFS+ and btrfs for COW and COR obligations. This was one of the problems that demanded the most data-structure knowledge and infrastructural depth from me.
I think this question relates to personal growth and overcoming show-stopping obstacles with retrospective analysis? Something something smart-person-speak.
Generally, when you actively work on weird bugs and try to really understand what's going on, instead of doing quick hacky workaround, sooner or later you'll face some interesting bug. But it's sometimes exhausting to investigate stuff like that, plus most of the reasonable managers will try to prevent you from going down the rabbit hole if the bug takes too long to fix.
Now, I can usually think of three decent ways to do anything. Nothing really feels "hard", it's just a different amount of work.
Another angle is that the way to solve "hard" problems is finding a way to think about it that makes it easy. Once I've done that, I no longer think of the problem as "hard".
I think the real issue here, that I don't fully understand, is what interviewers are really asking with that question? What do they want to hear?
Interviewers are being lazy with that question, essentially. They're saying "Wow me so that I can know you're the most impressive."
This is a problem if you don't think of interviews as a competition over who's the most sparkly (also, who's the best storyteller and/or who had the best script).
My experience is that people are shockingly bad at interviewing. They throw all the work onto the candidate and expect to get good hires that way, which is rarely successful.
I find this easier because usually hearing the interviewer talking about things will trigger my memory as to when I was working on similar problems. It's probably better for them to know a relevant example anyway.
Now, 4GL languages let you do anything easily, so nobody really put thought into anything. The result: everything was soft-coded and the database grew to around 4,000 tables. The database itself wasn't even that big, at around 10 GB.
The sheer number of tables made it impossible to use an ORM layer, because back in the day Hibernate and the others had no option but to map everything at startup time from XML files or annotations and keep all the metadata about tables and relationships loaded in memory. Just the metadata was using about 5 GB of memory.
However, as part of the migration we managed to build all the UI straight from the 4GL definition, so we really needed a way to create queries from the UI metadata using object introspection.
We ended up writing our own object query language and the translation layer to build SQL queries out of it. It sounds bad, but in the end it wasn't impossible even for a small three-man team: we didn't need to support the full spectrum of possible interactions, only what the UI needed to load the data (and yes, this was a thick client).
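As a hedged sketch of what such a translation layer can look like (table and field names invented; this is nowhere near the real implementation): a tiny filter description derived from UI metadata is compiled to parameterized SQL.

```python
# Toy object-query-to-SQL translator: the UI layer hands over a table
# name plus (field, operator, value) tuples derived from its metadata,
# and we emit a parameterized query (never interpolating values, to
# avoid the Bobby Tables problem mentioned elsewhere in the thread).
def to_sql(table, filters):
    """filters: list of (field, op, value) tuples from UI metadata."""
    ops = {"eq": "=", "lt": "<", "gt": ">"}
    where = " AND ".join("%s %s ?" % (field, ops[op]) for field, op, _ in filters)
    params = [value for _, _, value in filters]
    sql = "SELECT * FROM %s" % table
    if where:
        sql += " WHERE " + where
    return sql, params

sql, params = to_sql("customers", [("age", "gt", 30), ("city", "eq", "Rome")])
print(sql)     # SELECT * FROM customers WHERE age > ? AND city = ?
print(params)  # [30, 'Rome']
```

A real version would also validate field names against the 4GL metadata, which doubles as protection against injection through the identifier positions.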
Your example of changing block to inline-block can very well take time and effort depending on the issue at hand. So yeah - this is a very vague question, in my opinion.
That's why we can't look back at something as "hard". Or maybe it's not. It's a good time to read that book again.
- Consulting for a customer where they were deploying to new hardware with a new processor architecture, I received a report that an application was running slower on the new servers than it was on the old ones. I started out looking at things with strace and ltrace, had to move deeper and pull out perf and systemtap, but found that it looked like memory access was slower than on the old hardware. I did research on the processor, and found that it was due to the 'Intel Scalable Memory Buffers'. Since memory first had to be loaded into the buffer before the CPU could access it, things not in the buffer already had higher latency, but things already in the buffer were much more quickly accessed than they would have been previously. I worked with the developers to make up for this performance decrease in other ways. Their application was well suited for using hugepages, but they were not, and TLB pressure was causing performance bottlenecks in other areas. Switching to hugepages prevented TLB pressure, and the application ended up being even more performant on the new platform due to the increased amount of available memory allowing for a large amount of hugepage allocations.
- I was consulting for a customer that was running instances on a xen platform. They were having performance issues vs. their old bare metal deployment, and had already done some analysis. They gave me a perf report that was showing a massive amount of time being spent with a specific xen hypercall. I had to dig into the xen source code to figure out exactly what that hypercall was doing, as general public documentation about it was somewhat vague. I was able to determine that it bundled up a bunch of different operations, so it wasn't conclusive from that, but it did narrow down the possibilities. It was enough to point me in the right direction, however, and I was able to determine with a little bit of trial and error with some tweaking that it was ultimately related to decisions NUMA was making. It turned out that the customer had thought they were doing NUMA node pinning, and ultimately weren't. Interestingly enough, even with pinning, we still saw some of this, and completely disabling NUMA (all the way - not just balancing) actually ended up being needed to fully reclaim the lost performance. I also learned an important lesson in trusting customers - even the ones that know what they're doing aren't always right, and while I should trust them in general, verifying their answers is important. I discounted investigating NUMA as early on they told me they had their applications pinned to nodes, and I would have otherwise investigated that more quickly and probably solved the issue in less time.
Eventually I just gave up.
Layer 8, i.e. human beings. The software side of things I can eventually solve by hammering at the keyboard until it works. But the people using it, and the ever-changing requirements they have - especially since this influences my software design - are definitely the hardest part.
However it just occurred to me that maybe the hardest problem I've had was actually making up an architecture from scratch as the problem was unfolding itself, and then having to maintain it and even bring others aboard. Meaning I had to document as much as I could (even though I had very little time for this) and I also had to sometimes give more priority to a not-so-important bug (vs a very pressing issue for me), not because it was critical to any feature but because it was making it very painful and hard for a teammate to implement one which in turn would later delay some other feature.
And the major reason why there was no actual planning to avoid this as much as possible, was because features were being decided on the go by the top brass on a case by case basis, completely opposite of the original direction I was told we were going to go (which was the information I used to lay down the foundations of the project). I.e. I was told at first that this was going to be just a wrapper script and it ended up being a whole orchestrator including multi-node operations needing result consolidation, a state machine to track down the... uhmm...state of the system, and things like that.
So my point is that there are probably several axes of "hardness" in a problem that can be mixed together, and that makes it difficult to compare one problem to another (i.e. over which combination of axes are you comparing them?). I guess part of the response to such a question in an interview would then be to explain the context, so that it can be more easily understood why that problem was perceived as hard and along which axes. Was it because the problem was an optimization one and the previous code was impossible to work with? Was it because the business constraints (as I believe was my case) were surreal? Was it because the teammates made it really hard to move forward (e.g. bureaucracy, defensive/aggressive coworkers, etc.)?
And I know we are talking about "technical problems", but I find it increasingly hard (as my career advances) to make a distinction between what is and what is not a technical problem. If business constraints dictate that a certain sub-optimal solution must be developed, and that in turn causes technical issues, was that a technical problem? If a teammate is disruptive and introduces sub-par code that later causes bugs that need to be addressed immediately, was that a technical problem?
In my mind they probably all are, to some degree, just by virtue of ultimately influencing whatever technical decisions are being made. So maybe that could be part of the answer: asking the interviewer which specific sense they mean when they ask about the hardest technical problem.
I've got two answers that I would probably consider.
#1: debugging what ended up being a hardware problem. I was working on a device with a microcontroller, and it had a sleep mode where the micro would program an RTC, shut itself off, and the RTC would trigger the board's wakeup circuit when its alarm fired. I'd already told the board designer of two or three hardware bugs that somehow (surprise!) turned out to be my software bugs. So this time I was a little more cautious. There was a more senior software engineer working with me, and he told me to check the schematic. I looked at the processor manual and the board schematic, and followed the traces to make sure I was doing it right. And I just couldn't find out what was wrong. So the senior sw eng said, "well, ok, if you're sure, then just probe the RTC pin with a scope." Wow. An o-scope. WTF is this gloriousness? So I got to learn a bunch about how to go from the board schematic to the board layout, how to probe, what all the stuff on the scope was about. Sure enough, the RTC alarm went off on schedule, but the trace showed some funny stuff that indicated there was a design error in the board somewhere (I didn't understand the details, but IIRC a cut-and-jump of the prototype made the bug go away).
Motto: It's never a hardware design bug. Until it is.
#2: This bug I learned a good amount from. I would see frequent misbehavior in my code where it looked like multiple subsequent sessions were being corrupted somehow, perhaps from a previous session. I was certain that I was releasing resources from the previous session and destroying all of it. I watched my code hit my `boost::shared_ptr<foo_t>::reset()`, and so clearly it was now gone. Right? Well, shared_ptr<> isn't all it's cracked up to be. So I went back to read the conventional advice about shared_ptr<>, and people would frequently suggest boost::weak_ptr<> where appropriate. I mistakenly thought about these as a dichotomy for some reason. But that was no good, because I couldn't share my weak_ptr<>, so it's not really useful. Except -- wait -- the vast majority of the time I'm propagating my shared_ptr to places that don't need to share it beyond themselves. So my design would actually be better if I shared the shared_ptr as a weak_ptr anywhere other than Right Here. In doing this redesign, I realized that the weak_ptr promotes itself temporarily by effectively asking "hey, is this still allocated somewhere?" Turns out that the other thread using this resource would occasionally take slightly longer and wouldn't decrement its shared_ptr until after the new session had started, which meant that the old resource was never destroyed. After the redesign, in the case where the background thread loses the race, it would just fail the weak_ptr<> promotion and harmlessly skip its activity.
Motto: shared_ptr<> and weak_ptr<> help preserve an ownership metaphor. Which code Owns this memory/resource and which code is just "borrowing" it?
I have solved about ten "hard" problems in my career, most of which has been in R&D. Each one of these had multiple prior failed attempts, and in some cases took me months of thinking before I could find a solution.
1. Qualcomm wanted me to devise a computer vision solution that was more than two orders of magnitude more power-efficient than what they had then. There was clear justification for why such a drastic improvement was needed. Nobody had a solution, in spite of trying for a long time. Most laughed it off as impossible. I started by looking for a proof of why it could not be done, if it indeed could not be done. After some three months of pulling my hair, I started getting glimpses of how to do it. Some three months later, I could convince myself and a few others that it was doable. Some three months later, the local team was fully convinced. Some three months later, the upper management was convinced. You can read the rest here: https://www.technologyreview.com/s/603964/qualcomm-wants-you...
2. I wanted to solve a specific machine learning and Artificial Intelligence challenge. I would code for a day or so, and then again run into days of thinking about how to proceed further. E.g., I coded a specific parser algorithm for context-free grammars, including conversion to Chomsky normal form, in 1.5 days including basic testing. However, what next? I woke up with new ideas for about ten days in a row. I conceived of Neural Turing Machines back in 2013, about a year before Google came out with their paper on the subject. (Unsurprisingly, I did not have that name in mind for it back in 2013.) I also did not get an actual opportunity to work on it, as a result of which I am still not sure whether I could have actually done it.
3. Needed to make a very sensitive capacitance measurement circuit, trying to get to attofarad-scale floating capacitance even with pF-scale parasitic capacitance to ground. The noise and power requirements were very challenging. After about three months of seeking input from the team lead without hearing a solution, I ended up coming up with one myself. I later discovered that the technique was already known in RF circles, though only a few were aware of it. Capacitance measurement circuits with such sensitivity did not show up in the market for several years. (My effort was targeted at use inside a bigger system.)
4. I was working on measuring bistable MEMS devices. The static response of these was well understood. However, so far, the dynamic response had only been measured by the team; there was no theoretical explanation behind it. We invited several professors working in the field to give seminars to us, and asked them questions about this, but never heard back a good answer. A physicist colleague found an IEEE paper giving the non-linear differential equations behind it, which worked, but it provided no insight into the device behavior and took time to solve numerically. I wanted a good enough analytical solution. I kept on trying whenever I had the time and opportunity, while the physicist colleague kept telling me to give up. Six months later, I woke up with a solution in mind and rushed to the office at 7 am to discuss it with whoever was at work at that time. The optics guy I found did not fully understand it, but did not find it crazy either. A few hours later, the physicist friend confirmed my insight by running some more numerical solutions. I could then soon find tight enough upper and lower bounds, and the whole thing fit the measurements so well that most people thought it was just a "curve fit". (It was pure theory vs. measurements plotted together.)
5. I proposed making pixel-level optical measurements on mirasol displays using a high-resolution camera to watch those pixels after subjecting them to complex drive waveforms. Two interns were separately given the task (surprisingly, without telling me), and both failed to develop algorithms for pixel-level measurements. Later a junior employee worked on it and still could not achieve pixel-level measurements, though he was able to get it to work at lower resolutions. The system took about 40 minutes of offline processing in Matlab. Later, a high-profile problem came up where pixel-level measurements were a must, and I was directly responsible for solving it. Solved it in one day. It processed images in real time, not 40 minutes. The system stayed in deployment for years to come.
6. We had bistable MEMS devices, and there was a desire to make tri-stable MEMS devices. Several people at the company attempted it, including a respected Principal Engineer, but no one could figure out how to even start. I could not figure it out either at the outset, but I started bottom-up from physics, using Wolfram Mathematica to create visualizations around the thing. And bingo. In a few days, I had not only figured out how to make these tri-stable MEMS devices, but also multiple schemes for driving them. My VP's reaction was "Alok, you should patent that diagram itself", given the clarity it had brought to the table.
7. We were creating grayscale/color images using half-toning. A famous algorithm, Floyd–Steinberg, works very well for still images but has lots of artifacts on video. A PhD student working in the field was brought in as an intern; nevertheless, the results were not great. The team also tried binary search algorithms to find the best outputs iteratively; however, that was not implementable in real time as needed. I was interested in the problem, but was not getting the time to give it the fresh thought it needed, until one day I did. A few days later, the problem was solved. I had developed some insights into it and just coded up the solution, to the surprise of people who had spent months working on it.
I could go on writing about more cases.
So, um, how would you say your skills deploying to NodeJS are? Would you rate them as strong? Tell you what, let's go ahead and break for lunch now, and Sam is going to show you around the campus a bit, and then we'll continue with a follow-up and some coding challenges.
So much work involved. Very complex problems, needs a lot of theory but also practical knowledge. Needs good debugging skills. And endless amounts of time.
One benefit of using the STAR technique is that you are not going to ramble. It should not take you more than 1 minute to fully lay out the Situation, Task, Action, Result. After that "executive summary", if they want you to go more in depth, the interviewer(s) can ask you.
https://en.wikipedia.org/wiki/Situation,_Task,_Action,_Resul...
I feel like STAR is important in the same way and for the same reason as 'start with the punchline'. Both are good ideas, and both are aiming for 'keep it short and relevant.' Which, having interviewed and hired many people over the years, I'd have to say is reasonably good advice.
There are plenty of exceptions to both of these ideas though. I probably have more trouble getting engineers to elaborate on something than I have with them going on for too long. I quite enjoy a candidate who will help me carry a conversation, who will ask questions of me, who will offer and inject relevant or interesting side-details into their story. Going on a tangent isn't a bad thing unless it's negative or irrelevant.
I will make the following broad points:
1. Never walk into that room without practicing. Practice before a mirror, practice before a friend, practice in a car. Have a written script and optimise it to remove redundancy, highlight achievements etc.
It is not about repeating what you have practiced, but about having a free-flowing conversation where you don't have to struggle for words or sentences, all while maintaining a confident posture.
2. Converse not interview
A lot of people fail to keep the conversation going. It is not like an FBI investigation; it is more like friendly banter. Think of a scenario where you are talking to a potential roomie. It is okay to walk out of that interview without an offer, but you should still feel good about having conversed with another geek just like you.
---
Maintain the mindset outside of interview preparation. Most people fail at this.
Good interview preparation begins months ahead. You need to look at your co-workers' code, give them feedback, learn to make needed improvements in your existing code, solve algorithm problems, and discuss technical problems on Stack Overflow and elsewhere. Build a mindset where you are able to talk about technical work with other people. Speak more, listen more, and advise more, starting at least 3 months ahead.
This is good advice but it applies to both sides.
The best way to learn about a person's technical background is to start from a common base and go over their experience. I like to start talking backwards from their resume, and say "OK, Job A. What were you focused on there? Your description mentions technologies B,C,D. How did you apply them?"
You then just take it from there, pick up on the things they discuss to get into the technicalities. Ask them hypotheticals. Ask them how that technology could apply to a different problem set. Ask them about things that annoyed you specifically about those technologies in the past and how they addressed/resolved them. etc.
This is the best way to interview, in my experience. It keeps the pressure low, it doesn't waste time on rehearsed answers, it doesn't waste time on whiteboarding unless it comes up (a very basic take-home project, 30 minutes or so, should be given pre-interview), and it lets the person discuss their experience and provide real feedback about the things they've learned. It gives them the opportunity to discuss their technical habits, values, and interests. It reveals the most about the candidate in the minimal amount of time.
So many of my colleagues would lock up when they'd go in to interview people and not know what to do. They'd sit there and just expect the candidate to know what they wanted to see and carry the whole thing. They'd print off a list of questions that they found from a site about how to interview people, or they'd give them a code trivia quiz that is a massive waste of time for everyone.
All of that is very silly and misses the point. Everyone just needs to relax and hold an unscripted technical discussion. You can go in with an outline to make sure you hit the topics intended in the course of the discussion, but shouldn't need more than that.
The skills of a con artist are not related to the ability to build good systems.
> The skills of a con artist are not related to the ability to build good systems.
lol.
Communication is about moving information through someone's senses and into a model constructed in their head. If you can't effectively communicate about yourself, the interviewer is going to make more inferences about you and may focus on areas that aren't your strengths while not even knowing to ask about strengths you think are very relevant.
It would be great if they could just sense your innate value through your aura, but it's not going to work. Being able to talk about yourself may feel uncomfortable or like self-aggrandizement, but that's actually a great reason to practice it. The interviewer wants to learn about you but also has a bunch of other explicit and implicit goals (get through the interview questions, to not be incredibly bored, etc), so there's no reason not to do a good job at honestly telling them about yourself.
For every gregarious person who uses their communication skills to fake competence, there's at least one person who is convinced that they are a misunderstood genius, but their lack of supposedly BS communication skills has cursed them from ever being fully appreciated by the "normals" that they think they are better than. You don't want to be either one of those people, they are equally useless when working on hard problems.
I think it is very, very rude of you to call it a skill of a "con artist". I have seen teams of average individuals achieve a lot more than several very intelligent people, simply because together they worked a lot better. Any company that ignores the communication and personal skills of its engineers is bound to fail.
> Where do you see yourself in 5 years?
Dead.
> Why do you want to work here?
You have money.
> How do you handle disagreements with coworkers?
Attempt constructive engagement, and if that doesn't work then shun them.
Have you thought about going into politics or drugs instead?
> Dead.
Did you say you have a three-year vesting schedule? I'll be at another company probably, since you'll have me doing two jobs for my original compensation and title after the second year.
> In general, real stories are told chronologically backwards. This is why we start off with a punchline. In contrast, practiced stories are told chronologically forwards. It’s a solid indication as the interviewer that the person is reciting something they have committed to memory if they tell the story forwards, and in turn it’s significantly more likely that the story isn’t entirely true.
I have a friend who - bless his heart, I adore him, but he can't get a quick story out to save his life. For every point he makes, he reserves the punchline for last, and he starts by going off on a back-story tangent first, which usually forks into multiple back-stories. I've been trying to nudge him to turn it around and give away the punchline first, but he's deeply convinced that good stories are like movies and need to have a backstory followed by a narrative arc that doesn't make its final point until most of the way through act 3.
This is awful and just completely untrue. Many companies that take the time to want to do interviews properly will have something similar to STAR or SOARA implemented, and you'll be starting with the situation, move on to the tasks/target you wanted to complete or hit, the actions you took to achieve that, and the results of what you did. This is chronologically forwards.
This comment is the kind of pseudoscientific crap that makes interviewing a crapshoot, and it is a good indication of an unstructured interview.
The broader point of that quote is that a dynamic conversation usually does reveal more truth and paint a more accurate picture than a practiced story. I find that to be very true.
I feel like you might have misunderstood the article and decided it was wrong before taking the time to understand. That could be an indicator of poor writing in the article, or of excerpting and discussing a quote out of context, but is it helpful to respond with hyperbole?
STAR & SOARA do not dictate a chronology, so they are orthogonal to this point. But their goals align with the article & this quote almost entirely, if you think about it.
(This is one of those things that's going to bug me for a while every time I tell a story.)
I would rather disagree with the idea of telling stories backwards. We aren't doing Memento things, after all! :-) It's best to tell the (true) story the true way, the way it happened.
When giving information to someone, sure, lead with the important stuff. But when telling a tale? Pff, no.
For that matter, there are classic examples of movies that tell the story backward, or give away the ending first.
There's some difference between headline first and punchline first, but either way the real point being made was turn it into a conversation by giving the shortest possible answer first, and letting the other person request the backstory as needed. Make sure they're doing some of the talking and driving the direction of the story. Make sure they're interested and controlling the direction and amount of your narrative.
OTOH, if you're in a setting where it's expected that you'll take ten or forty five minutes to tell a good story, then a narrative arc that increases tension for a while is probably a really good idea.
On my Linkedin (and also resume etc.) I give a link to my blog / github. Every time I've been asked about it in an interview setting, it was actually when I was the one conducting the interview, and the interviewee was trying to impress. Much as it pains me to say, I don't think side projects are a good way to bolster your CV, at least in my field.
> What is the hardest technical problem you have run into?
"Tell me an unverifiable story in which you're the hero."
I really hate this question:
> Where do you see yourself in 5 years?
Any post on HN about interviews draws a ton of comments, and they're usually the same comments as every other post on the subject.
Honestly at this point having gone through a reasonably large number of interviews I think it comes down to brushing up on basic CS knowledge and, more importantly, whether or not they like you. As much as we like to make interviews dispassionate assessments of proficiency it really does seem like basic chemistry is the key issue. And honestly that makes a certain amount of sense: most people don't want to work with someone they dislike.
The article doesn't really justify the process people go through as a good one. People may think they have a good approach to interviewing, but their sample size is too small to back it up; worse, it presents an opportunity to people who are good at telling stories but may not have the skills to go along with the story in the end. People who hire based on the story-telling experience will eventually get burned.
I think someone should read this and feel a little worried. This is story-telling. One might argue that telling a story is being able to communicate, but it's merely one form of communication among the many that are needed, depending on the work environment, and relying on it results in a blind spot for your team's hiring process.
Much of how we discuss what we do can't be empirically precise, unambiguous, and scientific. Certainly the more social and abstract aspects of our jobs begin to sound, as you put it, like storytelling.
(To be clear, it's far from the only thing I look for, and can be taught/learned to a degree, but I consider it as a key skill in the broader bucket of "communication"; alongside problem solving and base competencies. And like programming itself, having some experience helps.)
Moreover, getting people talking is a great way, in my experience, of identifying strong thinkers, strong coders, and strong experience. It helps you see someone's personality, it helps you literally get to know them. I can't think of any reasons why I would worry about that before hiring someone. I would worry about not doing it.
I would like to know what you would offer as a better alternative approach? Do you prefer the idea of coding questions to stories?
Wait a sec -- just because the author of the article doesn't know how to get value from those questions doesn't mean that those questions hold no value. It is true that they won't give you information to help you in a tech screen, or to gauge the value of where to initially place a candidate on a team. But if you are trying to decide between a few candidates of equal skill, and trying to figure out which one will work better in a team environment, which will fit more smoothly into the personal dynamics between team members, who will grow better as the company grows, who might be a better leader or follower, and what their trajectory might be as the company and team evolves, these questions can lead you down those paths.
Dismissing those questions as useless makes me think the author doesn't care about the people as individuals, but just as machines to be plugged in to produce code. And that doesn't sound like someone I would want to work for.
On conscientiousness:
Barrick, Murray R., and Michael K. Mount. "The big five personality dimensions and job performance: a meta-analysis." Personnel Psychology 44.1 (1991): 1-26.
On work samples:
Roth, Philip L., Philip Bobko, and Lynn McFarland. "A meta-analysis of work sample test validity: Updating and integrating some classic literature." Personnel Psychology 58.4 (2005): 1009-1037.
It actually worked really well - it brought some projects I'd forgotten about back into my mind making it easier to talk about them, and gave the interviewers specifics to latch on to.
For example, I get this a lot as an opening question, mainly from crooters (actual hiring managers almost never do this): "What are your skills?"
You mean like numchuck skills, bow-hunting skills? If you don't know what kind of skills I have that could possibly be germane to the positions I'm looking for, you obviously didn't even read my résumé, which means you don't have a clue, which means I am hanging up on you because obviously you can't help me.
If you say "Can you tell me about your role at company X, what sort of challenges you had, etc." I'm more willing to open up.
Recently interviewed a candidate who seemed promising, until they started to rant. I didn't want to interrupt them because I was hoping there was a point to be made at the end of the rant ... but in the end it was just 5+ minutes worth of "my current job isn't fair and everything sucks and everyone who is better than me really sucks too".
Didn't hire.
After an enjoyable conversation, the hiring person will rationalize wanting you all by themselves, even if they have to make up / project qualities you've never demonstrated.
95% of the time, there's nothing rational about hiring.
Almost always asked by companies that don't have problems to offer even remotely comparable to the "war stories" they're expecting you to rattle off -- at least not for the position you're applying for, anyway.
well technically, the hardest problems I have encountered are not technical