All the solutions presented started by declaring an 8x8 board, with a 1 or 0 in each cell to represent a queen. Then there was some two-dimensional iteration over the board, testing whether any queen was on the same row, column, or diagonal as any other.
I'd dismissed that solution as inefficient from the start. Since only one queen can be in each column, I represented the board as an 8-element array with each element containing the row number of the queen in that column.
My second insight was that only one queen can be in each row, so the elements of the board array were simply the numbers 1..8. The search consisted of permuting those digits, then testing for diagonal conflicts: for each pair of columns i1 and i2, verify abs(board[i1] - board[i2]) != abs(i1 - i2). That is, the difference between their row numbers must not equal the difference between their column indexes.
Finally, when permuting, you could test the elements as you went and trim the permutation tree drastically if you found a failure. No sense permuting the right-most elements of the array if there's already a diagonal conflict in the digits placed so far.
The whole thing ran in trivial time. The contest winners ran in hours or days (this was all in BASIC on sub-1 MHz processors back then). Made me wish I'd actually entered the contest. But I didn't bother; I guessed many people would have found the same solution.
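A minimal Python sketch of the scheme described above (a reconstruction, not the original BASIC program):

```python
def queens(placed=(), n=8):
    """Yield solutions; placed[i] is the row of the queen in column i."""
    col = len(placed)
    if col == n:
        yield placed
        return
    for row in range(n):
        if row in placed:                       # one queen per row
            continue
        if any(abs(row - r) == col - c          # diagonal conflict: prune here,
               for c, r in enumerate(placed)):  # never permuting further right
            continue
        yield from queens(placed + (row,), n)

print(len(list(queens())))  # the classic answer is 92
```

Because conflicts are checked as each column is placed, whole subtrees of the permutation space are pruned, which is why it runs in trivial time.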
If anything, specify traits that define the movement of the pieces and have each piece extend that trait (with the Queen extending both CAN_MOVE_DIAGONALLY and CAN_MOVE_LATERALLY).
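In a language without traits, the same shape can be sketched with Python mixin classes (class and method names here are illustrative, not from the comment):

```python
class CanMoveDiagonally:
    def diagonal_moves(self, x, y, n=8):
        # Squares reachable along the four diagonals, ignoring blockers.
        return {(x + dx * s, y + dy * s)
                for dx in (-1, 1) for dy in (-1, 1) for s in range(1, n)
                if 0 <= x + dx * s < n and 0 <= y + dy * s < n}

class CanMoveLaterally:
    def lateral_moves(self, x, y, n=8):
        return ({(i, y) for i in range(n) if i != x}
                | {(x, j) for j in range(n) if j != y})

class Bishop(CanMoveDiagonally):
    def moves(self, x, y):
        return self.diagonal_moves(x, y)

class Queen(CanMoveDiagonally, CanMoveLaterally):
    # The Queen mixes in both movement capabilities, as the comment suggests.
    def moves(self, x, y):
        return self.diagonal_moves(x, y) | self.lateral_moves(x, y)
```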
[0] https://en.wikipedia.org/wiki/Liskov_substitution_principle
It's the interviewers who are ignorant here, they took a real problem and translated it badly into a toy problem. Then they failed to realize what they did.
The article's author isn't just unaware of them, they've written a petulant, arrogant article demonstrating their own lack of experience and adaptability.
If OP wanted to crush the question, they could quickly answer it by saying: here is how you could implement it using classes and inheritance; now let me explain why OO is a poor fit for this specific problem, and what kinds of problems OO design is most useful for.
I also think optimization matters: if you can reduce memory consumption with a simple and elegant solution, why not?
But the interviewers seemed to drop the ball as well, making irrelevant and false objections.
I mean, does this guy really think the company cares about efficiently representing chess game states??
Obviously, obviously, OBVIOUSLY, the chess aspects of this question are irrelevant. What they are trying to find out is how well you will work on their actual codebase, which is presumably not composed of global functions operating on bitfields.
Yes, that is, it is precisely addressing the question asked, not some unstated hidden intent.
If the tester wants to test the ability to apply a particular paradigm, that paradigm should be part of the explicit question. If you want to use a chess representation as a vehicle for testing OOP skills, then specify that object orientation is a requirement in the question.
Otherwise, you should be judging the interviewee on how well they did on the challenge you actually posed. What you are really judging otherwise is whether OOP is the tool the interviewee reaches for regardless of whether it's appropriate to the problem in front of them.
Some aspects of his solution aren't. He just took a holistic approach instead of the classic reductionistic/analogy-with-the-not-so-real-world one. Holistic approaches are applicable to much more than chess. Game engines, for instance, often benefit from Entity Component Systems, which are effectively in-memory domain-specific databases, quite unlike OOP. More generally, Mike Acton's Data-Oriented Design is applicable in any setting where performance matters. Another example is functional programming, highly suited to data transformation and symbolic processing, such as found in compilers and interpreters (probably not the high-performance ones, though).
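A toy illustration of the "in-memory database" flavor of an ECS, with each component stored in a table keyed by entity id (all names here are hypothetical):

```python
# Component tables: each maps an entity id to that component's data.
positions = {}   # entity -> (x, y)
velocities = {}  # entity -> (dx, dy)

def physics_system(dt):
    # A "system" is essentially a join over the component tables:
    # it runs over every entity that has both a position and a velocity.
    for eid, (dx, dy) in velocities.items():
        x, y = positions[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

positions[1], velocities[1] = (0.0, 0.0), (1.0, 2.0)
positions[2] = (5.0, 5.0)   # a static entity: no velocity row, so it's skipped
physics_system(dt=1.0)
```

Note there is no per-entity object at all; behavior lives in systems that query the tables, which is exactly why this feels more like a database than OOP.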
Objects have always been about modeling and design, a way to talk about a problem in language closer to how you would talk about it naturally. Object-thinking is much more important than anything else. There are many ways to spin this, but most game engines and simulation environments benefit from this in spades (and have ever since Simula 67).
However, this entire thesis is founded on a false dilemma, because you can write an OO domain model with a compact representation.
That way you’ll end up with something that works much sooner, instead of getting caught up in designing layers of abstraction.
A much better term than abstraction to me is "semantic compression" (got that from Casey Muratori of Handmade Hero). This basically means factoring out common bits with the goal to express the solution with the least amount of repetition (while of course taking care not to factor out parts which are only coincidentally common. I figure that's the "semantic" part).
To do semantic compression you need abstraction, but not pointless abstraction -- just the right amount.
Nobody programs without problem-space abstractions any more; this is effectively what you get with functions and libraries. When someone uses a prebuilt tangent function, that's working in problem-space.
Language-space abstractions don't pull the same weight. If they did, Haskell programmers would be so much more productive than C programmers that the latter would simply be competed out of the market. Instead we see marginal benefits against marginal costs, and the gamut of C, C++, Go, Haskell, Python, JavaScript, etc. is, if not equally good, at least sufficiently similar that there is genuine debate.
If in doubt, abstract the problem. If you do an abstraction and don't have less work to do afterwards, maybe you've abstracted the wrong thing.
That implies a much smarter market than I believe exists.
I think it’s possible (not arguing it’s true) that Haskell is dramatically better yet entrenched interests (e.g., managers afraid of not being able to find programmers, and programmers afraid of being obsoleted) block the logical conclusion.
For example, I recently needed to randomly reorder a collection of objects, weighting some of them more heavily than others such that 'heavier' objects are more likely to come first. My first implementation was a mess, and was hard to test. So I created an abstract data structure that allowed inserts with weights and polls for a random value, applying the weighting. My business logic could use that abstract thing presuming it worked. The tricky part was hidden, and the code was cleaner.
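One way to build such a structure is weighted sampling without replacement via Efraimidis-Spirakis keys; this is a sketch of the idea, not the commenter's actual code:

```python
import random

def weighted_shuffle(items, weights):
    """Return items in random order; heavier items tend to come first."""
    # Each item gets key u ** (1 / w) with u uniform in (0, 1); sorting the
    # keys in descending order yields a weighted random permutation.
    keyed = [(random.random() ** (1.0 / w), item)
             for item, w in zip(items, weights)]
    keyed.sort(key=lambda kv: kv[0], reverse=True)
    return [item for _, item in keyed]
```

The business logic then only needs the polling interface; the weighting math stays hidden behind it, which is the cleanliness the comment describes.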
In the case of the article, the abstraction the interviewer wants doesn't actually make the code clearer. It just checks boxes: did the candidate write the word 'class' and jump through the hoops the way the interviewer expected?
All work has abstractions, even physical work. Consider a hand saw. In detail, its operation and efficiency rely on the teeth: their angle and sharpness in relation to what you're sawing. But the abstraction is that you can take a hand saw, place it on a tree branch or wooden plank, and move it back and forth until the cut is complete. Surely some teeth will be worse than others, and eventually they will be so blunt that the abstraction fails and you cannot complete the cut until you sharpen the saw. But on average, the abstraction of sawing works for hours and hours, generally days and days, in a row.
Having such an abstraction elevates our thinking to consider different ways to cut wood, make joints, fit ends, etc. That is all possible only because of the abstraction of how a saw works: it frees you from thinking about how the saw operates while using it, leaving those mental cycles for thinking about what you want to saw and how.
Programming is the same. There are abstractions for modelling the problem, there are abstractions for building the program, there are abstractions for data presentation and flow, and the whole computer you're using is a huge stack of abstractions so that when you press 'b' the letter will appear on your screen as 'b' no matter what the hardware and system your computer is running.
Sometimes, actually more often than not, abstractions are an easy way to add useless complexity. It usually happens when you don't yet know how to solve a problem, so you abstract it away in order to finish something else, effectively postponing the hard problem. The trouble is that you can do it again, and then again.
When you finally do get around to solving the big problem, you already have an interconnected mesh of abstractions and mechanisms in place, the ones you concocted while you were still guessing what the solution might look like, and you have to retrofit the correct solution on top of them.
Ideally you would've solved the hard part first at which point you'd already know which abstractions, if any, you will need to efficiently express the problem and the solution in simple and understandable terms.
My chess program - Dorpsgek - has an 800 byte board structure (though I recently reduced that to about 400). It uses techniques that the computer chess community considered to be impractical. But still it works, and it works because the space that I use per board contains very useful information, such as attacks to a given square.
The advantage that bitboards give is not space in any meaningful quantity: for an alpha-beta search of depth d with a branching factor of b you will allocate at most d boards (or even a single board) at any given node, not b^d boards. The advantages bitboards give are speed and information density.
And to anybody who thinks that a bitboard approach is not sufficiently generic for different board sizes, wrap the bitboard using an arbitrary size bit set and the problem is essentially solved.
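In Python, for instance, arbitrary-precision integers already behave as an arbitrary-size bit set, so the usual bitboard shift tricks carry over to any n x n board (a sketch):

```python
def square_bit(x, y, n):
    """The bit representing square (x, y) on an n x n board."""
    return 1 << (y * n + x)

def shift_north(board, n):
    """Move every set square up one rank, dropping squares shifted off-board."""
    mask = (1 << (n * n)) - 1   # only the n*n valid bits survive
    return (board << n) & mask

# Works unchanged for a 10x10 board:
b = shift_north(square_bit(3, 0, 10), 10)
assert b == square_bit(3, 1, 10)
```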
On the flip side, if the interviewer learns something from you in an interview and that means he doesn't want to hire you, aren't you glad to have escaped that work environment? Assuming, of course, that the interviewer is representative of the people you would be doing actual work with.
However it has definitely been the case where I’ve contended with some Senior Engineer who has found the light in OO programming, inheritance, Template Method, etc, etc and no amount of dialectic will change his mind.
This programmer often doesn't see the severe drag that his OO code, his type hierarchies, etc. put on agile delivery of new software. How do you show an OO-religionist...the light, so to speak? These questions are not easy to take into a lab to produce hard numbers.
Just an example here: Matrix algebra is easiest done with first-class matrix abstractions. So my compeer puts everything into some other abstraction. And I say, look, look how much easier it is to do our work when our data is modeled differently? And he says “that’s not easier” — okay, you are objectively right, but your peer doesn’t “see it” or refuses to see it, or what, I don’t know— but this is almost de facto industry situation—you can’t prove that your ergonomics are better because your peer doesn’t even speak in proofs anyway.
Now I could go find another job, but my equity is actually worth something here (it really is), so I put up with it. It's frustrating nevertheless, and adopting the better abstraction would only cost my coworker the effort of learning something. It's not going to make my equity any less valuable; if anything, it might make it more valuable.
https://github.com/cglouch/snakefish
(I cheated a little by not implementing castling / en-passant since it's a pain, and my engine is still really slow, but hey it works!)
So this is a failure of OO abstraction. Why? OO abstraction is really complex and makes many forced moves that provide little value here. Inheritance and a focus on “nouns” makes idiomatic code highly specialized to certain domain models. Unfortunately, these are rarely useful.
If you were designing for Chess 2.0 and expected game designers to change the rules every week (thinking up new kinds of pieces, changing the rules for existing pieces, changing the board layout, etc.), would you still use bitboards? Maybe it would be better to focus on the "nouns" the game designers use and keep optimizations like bitboards in mind for later?
But setting aside performance concerns, when we speak of a “good” abstraction we are usually (or should be) saying this is good for some purpose—good for readability, for example.
But even better, or of utmost importance in the real world, is this: is the abstraction in question "good for business"? And that comes down entirely to this: does the abstraction allow for rapidly acting on business ideas, creativity, needs, etc.?
However, I believe that once the context is fixed and agreed upon, there is an objective answer to which of two abstractions is better. Experience in the practical world of today's software development has painfully shown me that the "better" abstractions are harder to come by...and when "found", don't tend to stick. This is because most practitioners can't produce powerful algebraic systems (which is what "good" abstractions are: "algebras" over a business domain); practitioners are generally not mathematicians, and even have a philistine disdain for powerful systems if they have any whiff of mathematical power to them at all.
In this sense one could argue that an abstraction is "good" with respect to the social context in which it is used (i.e., if your team members don't understand how to correctly wield an abstraction, is that abstraction good?). However, I don't like these kinds of arguments, because a lesser system is still capped in its power even if all its users understand it.
There are limits to what you can do with, say, Euclidean geometry, even if it is much simpler to understand than other geometries. A frequent retort to this is "no it isn't", but that usually comes from perspectives with limited imagination. That said, many businesses are fine and thrive with limited imaginations.
Yes!
>"Programs must be written for people to read, and only incidentally for machines to execute." - Abelson, preface to [SICP](http://mitpress.mit.edu/sicp/full-text/sicp/book/node3.html)
What you describe with algebras, I would label the "precision" [1] of an abstraction. Precise abstractions/algebras require a good specification, though, and that is often missing in the business domain.
[0] http://beza1e1.tuxen.de/leaky_abstractions.html [1] http://beza1e1.tuxen.de/precise_abstractions.html
So if we’re talking about “facts” in an “ontology” — these are still (they must be if they are going to be processed by a machine) concrete formalisms - that’s what I mean by algebras.
If we are not machine-processing these ontologies but just printing them out for users, then I don’t count that. Because we’re not really programming over those abstractions. We’re just giving them back to the humans.
In a modern, state-of-the-art chess program that uses alpha-beta with branching factor b and depth d, you will have at most d boards allocated, or even just one. Neither of those figures approaches b^d. That makes board size largely irrelevant, except for the amount of memory copying needed, which for a d-board approach would be b^d copies over the whole search (a single-board approach would only mutate that one board).
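The single-board variant can be sketched as a negamax that mutates one shared state and restores it on the way back up (a toy game stands in for chess here; all names are illustrative):

```python
def negamax(state, depth, make, unmake, moves, evaluate):
    """Search without copying: the one state object is mutated and restored."""
    if depth == 0:
        return evaluate(state)
    best = float("-inf")
    for m in moves(state):
        make(state, m)          # mutate the single shared board...
        best = max(best, -negamax(state, depth - 1,
                                  make, unmake, moves, evaluate))
        unmake(state, m)        # ...and undo the move instead of copying
    return best

# Toy game: the state is one running total; a move adds 1 or 2 to it.
state = [0]
val = negamax(state, 2,
              make=lambda s, m: s.__setitem__(0, s[0] + m),
              unmake=lambda s, m: s.__setitem__(0, s[0] - m),
              moves=lambda s: (1, 2),
              evaluate=lambda s: s[0])
assert state == [0]   # fully restored: no boards were copied at all
```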
EDIT: One major issue I just noticed with the article is that the two differing implementations are not apples-to-apples equivalent. One uses bitboards, based around manipulation of bits, and another one is akin to representing the board with an array.
Don’t all pieces have a location? Aren’t all pieces movable? Might we want to display pieces? Do we want to journal them to files to save game state, or their moves to streams to play remotely?
All of these things can be done procedurally, but they also fit nicely into an OO design.
Make a table containing locations for each piece.
> Aren’t all pieces movable?
Make a function ("procedure") that moves a piece (e.g. edits the location table).
> Might we want to display pieces?
Make a render function (e.g. one that takes a piece type and a position).
> Do we want to journal them to files to save game state, or their moves to streams to play remotely?
Write serialization and deserialization routines (again, procedures that get piece type and position).
No need to cram all that into some notion of a piece "class". That only glues together things that don't belong together. OO is pretty much crap.
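The procedural answers above can be made concrete in a few lines (a toy sketch with a dict as the location table; the journal format and names are made up):

```python
# The location table: piece id -> (piece type, square).
pieces = {
    "wq": ("queen", (3, 0)),
    "bk": ("king",  (4, 7)),
}

def move_piece(table, pid, dest):
    kind, _ = table[pid]
    table[pid] = (kind, dest)           # the "move" procedure edits the table

def render(table):
    return ", ".join(f"{kind}@{x},{y}" for kind, (x, y) in table.values())

def save(table):
    # Serialization: one "id kind x y" line per piece.
    return "\n".join(f"{pid} {kind} {x} {y}"
                     for pid, (kind, (x, y)) in table.items())

def load(text):
    table = {}
    for line in text.splitlines():
        pid, kind, x, y = line.split()
        table[pid] = (kind, (int(x), int(y)))
    return table

move_piece(pieces, "wq", (3, 4))
assert load(save(pieces)) == pieces     # round-trips through the journal
```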
The real advantage of OOP, as used today in Java/C++, is instantiation. Instead of having procedures working on global variables, you have procedures working on local variables, including complex data structures.
Instantiation took some time to gain widespread adoption. Originally, even local variables were actually global, scoped variables, which forbade even recursive calls (see early versions of FORTRAN). Programming languages since then use a stack for their local variables (and return addresses), enabling recursion. The instantiation of more complex data structures (arrays, user-defined structs…) followed.
Ironically, instantiation took some time to become ubiquitous in the C language even though it fully supported it from the very beginning, with a stack and user-defined compound types. Case in point: Lex/Yacc, which use global names (and state) by default.
Now however, instantiation is so pervasive (global variables are assumed evil by default), that we don't even call it OO any more. We just call it good practice.
Secondly, OO is more easily extensible. If the company wants the game to support other board sizes, the bitwise storage has to be ripped out. 90% of the work in 90% of jobs is writing clear, easily maintainable, extensible code, not maximizing performance. Frankly, if I interview someone who answers a question like OP did, I pass, expecting their code to be a premature-optimization nightmare.
Programming IS abstraction.
In fact, that's pretty much ALL programming is: using, defining, and implementing abstractions.
I wrote more on this here: http://beza1e1.tuxen.de/precise_abstractions.html
Abstracting is not everything there is to programming, but modern code consists nearly completely of abstractions.
The point is to allow programming things like the AI algorithm without caring at all about if the board is represented with a hash map, nested arrays, or a bitboard. In all cases, all the AI cares about is move generation, not internal representation.
Abstraction does not hinder ideas like the bitboard, it enables them. Both the interviewer and the interviewee are missing the point.
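Concretely, the AI can depend on one small board interface while the representation varies underneath (a deliberately tiny sketch; all names are hypothetical):

```python
from abc import ABC, abstractmethod

class Board(ABC):
    @abstractmethod
    def occupied(self):
        """The set of occupied squares, whatever the internal storage."""

class ArrayBoard(Board):
    def __init__(self, grid):            # 8x8 nested lists, None = empty
        self.grid = grid
    def occupied(self):
        return {(x, y) for y in range(8) for x in range(8)
                if self.grid[y][x] is not None}

class BitBoard(Board):
    def __init__(self, bits):            # a 64-bit int, bit index y * 8 + x
        self.bits = bits
    def occupied(self):
        return {(i % 8, i // 8) for i in range(64) if self.bits >> i & 1}

def count_material(board):
    # "AI" code: it sees only the Board interface, never the representation.
    return len(board.occupied())

grid = [[None] * 8 for _ in range(8)]
grid[0][4] = "K"
assert count_material(ArrayBoard(grid)) == count_material(BitBoard(1 << 4)) == 1
```

Swapping in the bitboard is then an internal optimization invisible to the search code, which is the sense in which abstraction enables it rather than hindering it.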
A cell is either empty, or contains one of the 12 figures.
To store 13 possible values you need 4 bits per cell, i.e. 64 × 4 = 256 bits for the complete board. That's four 64-bit unsigned longs.
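The arithmetic can be checked in code: 64 cells × 4 bits = 256 bits, i.e. four 64-bit words (sketched here with Python ints standing in for the unsigned longs):

```python
def set_cell(words, cell, code):
    """words: four 64-bit ints; cell: 0..63; code: 0 = empty, 1..12 = pieces."""
    w, off = divmod(cell * 4, 64)        # 16 cells fit in each 64-bit word
    words[w] = (words[w] & ~(0xF << off)) | (code << off)

def get_cell(words, cell):
    w, off = divmod(cell * 4, 64)
    return words[w] >> off & 0xF

board = [0, 0, 0, 0]                     # 4 x 64 = 256 bits for the board
set_cell(board, 0, 5)                    # put piece code 5 on cell 0
set_cell(board, 63, 12)                  # and code 12 on the last cell
assert (get_cell(board, 0), get_cell(board, 1), get_cell(board, 63)) == (5, 0, 12)
```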
For example, where would the equivalent of "findMoves" be declared in the latter case? Is the bitmath abstracted out into a set of helpers, or is it done in one big function?
Edit: The above is what I would say if someone presented this approach in an interview.