What if Apple decides to change the Mac Pro form factor for the next iteration? Then you have to retool and are left with a bunch of incompatible chassis. What if Apple stagnates with hardware upgrades? You'd be stuck running obsolete hardware. What if Apple discontinues the entire Mac Pro line? Not to mention the price premium of Apple hardware itself, plus the time and expense incurred to design and fabricate this.
The fact that their software depends on Apple's graphics libraries doesn't seem like a good justification for doing this. What it says is they are willing to throw a ton of money and effort towards (very cool) custom hardware, but are unwilling to hire a person to write optimized OpenGL shaders for Linux, which would work on pretty much any other server they choose to build/buy/lease/cloudify. Certainly there will be other "debt" to overcome, especially if much of your codebase is non-portable Objective-C or GCD, but that has to be weighed against the possibility of your only vendor pulling the rug out from under you. And looking at Apple's history, that is a very real possibility...
Owning your hardware like this makes complete sense if your core business is directly tied to the platform itself, e.g. an iOS/OS X testing service. But as far as I can tell, imgix does image resizing and filters... their business is the software of image processing, and they're disowning that while making unrelated parts of the business more complicated. Not a good tradeoff, IMO.
If you're worried about them not matching some piece of client software exactly (Quartz Composer, Photoshop, etc.), you still have options. And those options - e.g., webapp for previewing/something else/etc. - you'll probably want anyway, for the benefit of designers that don't run OS X.
(The filtering aspect of the system I find a little surprising anyway - the idea of an image-focussed client-aware DPI-aware CDN makes sense to me (and I like it!), but something that does your Photoshop filters in the cloud sounds less compelling. I would have expected people to prefer to do that locally, and upload the result. But... while I've worked with many artists and designers, I'm not one myself. So maybe they'll go for that. And/or maybe a lot of their customers take advantage of the fact that the processing appears to be free. And I'm prepared to contemplate the possibility, however unlikely, that they might know their customer base better than I do.)
Uploading pre-edited images takes time/resources, and in general a lot of our customers rely on us to do all of their image processing so that they don't have to.
Additionally, creating edited versions of images in advance presents two problems: 1) Any future site redesigns or edits must now be applied en masse to the existing images or risk older images not complying with the new scheme, and 2) Instead of only managing the one original source image in the origin, now we're talking about maintaining all of the different edited versions, which is very inefficient from a storage and image management perspective.
There are many advantages to applying all of the image transformations on-demand, rather than in advance. Keep in mind that we are not simply photo filters, but a full end-to-end image processing offering (which applies everything from simple Photoshop-style edits like cropping, face detection, color correction, and watermarks, to automatic content negotiation and automatic resizing/responsive design) that works on the fly; this means that our customers can now make batch edits to their entire corpus of images through a few simple code edits.
This can be extremely cost-effective, and it also helps reduce page weight significantly.
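The practical upside of the on-demand model is that a rendition is just a URL, so a site-wide redesign becomes a code change rather than a batch re-processing job. A minimal sketch (the `w` width parameter matches the imgix URLs quoted elsewhere in this thread; the domain and helper function here are hypothetical):

```python
from urllib.parse import urlencode

def variant_url(base, **params):
    """Build an on-the-fly rendition of a source image by appending
    query parameters, in the style of imgix-like URL APIs."""
    return f"{base}?{urlencode(sorted(params.items()))}"

# The stored original never changes; each size is derived on demand.
thumb = variant_url("https://example.imgix.net/rose.png", w=400)
print(thumb)  # https://example.imgix.net/rose.png?w=400
```

Changing every thumbnail on the site to a new width is then a one-line diff in the template, with only the original images kept at the origin.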
Hosts have a really nice markup compared to hosting yourself. Hosts make a lot of sense for small companies who can benefit from aggregated demand and capital costs being spread over many clients... but not when you're at the level of building your own datacenter, or even using a full rack.
It's funny how, since 1980, people have been talking negatively about Apple and "vendor lock-in". For most of that time, the same critics were advocating vendor lock-in to Windows.
The thing is when you build your system on an OS or hardware choice you're making "vendor lockin" to that platform. Build on Linux and you're locked into just Linux, unless you port.
There is little risk in being "locked in" to the largest, most successful company in the world. Plus the dramatically lower costs, given the radically higher performance of Apple's technology for this particular service, more than cover the expense (in fact I think one Mac Pro probably replaces 4 or 5 Linux boxes doing this).
If you think Optimized OpenGL shaders would do this, you're not understanding what it is that they are doing. You're just assuming it's a trivial problem, it is not.
Owning your hardware makes a great deal of sense when you are operating at scale.
You have no idea what the comparison is, and I don't either. But again, the criticism is around running a business off of a bunch of Apple "trash cans".
>advocating vendor lockin to windows.
Linux is no lock-in; Windows is lock-in via software, vs. Apple for both hardware and software.
>"vendor lockin" to that platform
Java, Scala, or any other JVM language protects against that, and to a lesser degree Python and PHP do as well.
>There is little risk being "locked in" to the largest most successful company in the world.
Price gouging? Deciding not to support your platform anymore? Forcing you to upgrade?
>Build on Linux and you're locked into just Linux, unless you port.
Only that there are a bunch of Linux options to choose from, they are all open source so you can do whatever you want as far as upgrade paths and support, and if you use the JVM languages this isn't an issue.
>in fact I think one Mac Pro probably replaces 4 or 5 Linux boxes doing this.
There is no fact there, that's your delusional opinion.
>If you think Optimized OpenGL shaders would do this, you're not understanding what it is that they are doing. You're just assuming it's a trivial problem, it is not.
It's a CDN + image manipulation tool, you don't need 3D libraries. And if you use existing libraries or tools, it is quite trivial. Here is their API: http://www.imgix.com/docs/reference
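For what it's worth, the core resize operation really is small; quality is where the argument lives. A toy pure-Python nearest-neighbor downscale, just to show the shape of the problem (production services use better filters like Lanczos, typically via libraries or on the GPU):

```python
def nearest_neighbor_resize(pixels, new_w, new_h):
    """Toy nearest-neighbor resize of a 2D list of pixel values.
    Illustrative only; real resizers use higher-quality filters."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * old_h // new_h][x * old_w // new_w]
         for x in range(new_w)]
        for y in range(new_h)
    ]

img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
print(nearest_neighbor_resize(img, 2, 2))  # [[0, 2], [8, 10]]
```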
> Why pay 5-10X as much to host on AWS?
Nobody said anything about AWS...
> Hosts make a lot of sense for small companies
Sure, nobody is disputing their choice of colocating themselves.
> OS or hardware choice you're making "vendor lockin" to that platform
It is abundantly clear that the vendor lock-in refers to single sourcing your hardware. That problem is nonexistent on Windows, Linux, BSD, etc.
> I think one Mac Pro probably replaces 4 or 5 Linux boxes
Oh come on, now you're just talking crazy... see other posts in this thread for a cost/performance comparison.
> you're not understanding what it is that they are doing
On the contrary, I think I understand better than you. Do you perform a lot of image processing work on various platforms (including OSX and Linux)? I do.
If you want to run a business that builds/tests using the OS X/iOS ecosystem, this is the only way to do it legally. Apple's licensing terms enforce this. Otherwise we'd be running OS X on generic pizza-box servers, since Apple's hardware is truly overpriced and not built efficiently at all for the datacenter (it works fine on desks). Apple really gimped the 2014 Mac Minis, btw. They perform worse than the high-end 2012 Mac Minis.
What barrier to entry? Their customers don't care that OSX is running under the hood. You can offer an image processing service using any platform today. Sure, on Linux it probably wouldn't be as efficient, but it doesn't have to be. Scaling is a Good Problem to have.
Basically, as you grow, it helps to take a critical look at risk factors and the technical debt which contributes to that risk. The longer you wait to pare down that debt, the more expensive it is, and the more exposed you are to that risk. A little more work up-front saves a lot of work later on.
It seems as though they're prepared for this. This version 2 of their process is already moving away from an existing Apple form factor to a new one. It doesn't seem to be a leap in logic to consider that, should a new form-factor be released, they'll modify their rack cases again.
What happens if a random upgrade causes major performance issues, or worse, just flat-out breaks their use case?
Looking at you, PS3 clusters.
Given recent history, that's not going to be for a number of years.
"What it says is they are willing to throw a ton of money and effort towards (very cool) custom hardware, but are unwilling to hire a person to write optimized OpenGL shaders for Linux, which would work on pretty much any other server they choose to build/buy/lease/cloudify."
Hardware will almost always cost less than engineers.
That is something that no one outside of Apple can say for certain. It doesn't even have to be a major change, but something like rearranging ports, adjusting taper or extrusions on the chassis, etc. Those kinds of adjustments happen all the time on consumer hardware, and most people don't notice, but may be an issue if you're trying to fit into precision machined slots.
> Hardware will almost always cost less than engineers.
For commodity, off-the-shelf hardware, absolutely. This is anything but, and still requires engineering effort to design, fabricate and assemble. And it's not always about the immediate dollars: sometimes a fundamental reworking means sacrificing short-term savings in favor of the long-term: flexibility, risk mitigation, reduced operational complexity, and cost over successive generations of hardware.
First, this is awesome. Just like I want to live in a world where people are paying picodollars for cloud storage[1], I also want to live in a world where a bunch of mac pro cylinders are racked up in a datacenter. Very cool.
Second, this is complete silliness. I'm not going to go down the rabbithole of flops per dollar, but there is no way that a hackintosh 1U with dual CPUs and multiple GPUs wouldn't come out big money ahead. Whatever management overhead gets added by playing the hackintosh cat and mouse is certainly less than building new things out of sheet metal.
Let me say one other thing: right around mid-2000 was when certain companies started selling fancy third-party rack chassis gizmos for the Sun E4500, which was the Cadillac of datacenter servers at the time. Huge specs on paper, way underpowered for the money they cost ($250k+), and the epitome of Sun's brand value. And there were suddenly new and fancy ways to rack and roll them.
This reminds me a lot of that time, and that time didn't last long...
[1] Our esteemed competitor, tarsnap.
I have contacted a lawyer about this (I wanted to run a Hackintosh in the office), and the language is very clear. The author of the software has the full power to license its use to you with any restrictions they find necessary, no matter how ridiculous. If Apple only sells you the license if you promise not to run it on a Thursday, you'll be in violation of their terms if you run it on a Thursday.
Indeed, there is a JS library open-sourced by Microsoft to decode Excel files, named xlsx.js. In the license, it is written... that it cannot run on any OS other than Windows. That means that even though it's JavaScript, the page hosting it cannot be viewed on a Mac or Linux.
Long story short, Stuart Knightley created a clean-room implementation named js-xlsx to do the same thing, without the lawyer strings attached.
Although the "only run on Apple hardware" restriction would probably be fine.
I've alluded to this elsewhere, but the math doesn't add up to your gut reaction. It's cheaper, but not by a significant enough margin relative to the engineering costs, to go with commodity servers and GPUs.
Building things out of sheet metal is actually easier than migrating to Linux, for one big reason: we can pay someone else to do it, because it isn't part of our core competency. In fact, I'm pushing to open source the design of this chassis, in tandem with our design partner (Racklive). Not sure if it will happen, but I'd love to see it.
There are 2 problems I see with this design:
1: You are placing the Mac Pros on their side, which may lead to premature bearing failure in the main cooling fan. Apple designed the cooling fan to be as silent as possible, which means they optimized the bearing and the fan to work in a vertical orientation. Bearings designed for thrust (vertical) loads may not work so well if placed horizontally for a long time.
2: You are fitting triangular shaped computers, wrapped into round cases, into square shaped boxes, resulting in significant loss of space density.
Considering that Apple is a huge company that owns huge data centers, combined with the fact that it would be simply stupid for a company that makes its own OS to run anything but that OS, and combined with the above-mentioned problems with using Mac Pros as server "logs" (because you can't call them blades), I would assume that Apple internally has OS X servers designed in the traditional blade configuration.
They may not sell or advertise them, but they MUST have them. Given that you guys are buying a ton of hardware, are located nearby, and would be actively promoting running Apple hardware, wouldn't it be wise to at least approach Apple and see if they would be kind enough to sell you some of those blade form factor servers they simply must have?
I may be completely wrong here, but Apple did brag about how Swift is the new language that's so flexible you can make a mobile app in it, or a desktop app, or even a full-blown social network. If that's the case, they must have some plans for the server market, no?
Anyway, in the end it's a cool design, but I would seriously consider at least stacking the Mac Pros vertically to avoid fan issues. You can actually get a tighter form factor that way as well, unless space is not the issue. And if it's not, then hell, what's wrong with just placing a bunch of Pros on an Ikea shelf in a well air-conditioned room :)
Image processing doesn't require double precision, so we don't need GPUs tuned for it, which means we can use FirePros and similar workstation- or server-grade cards.
Have you ever personally run a Hackintosh, full-time for a prolonged period of time?
It's anecdotal, but I can assure you that once you're used to how OS X and the Apple hardware work together and never, ever, ever crash, using a Hackintosh is an exercise in frustration.
I had one of the known-best Hackintosh configurations in existence, and it didn't hold a candle to the MBP I had prior to it in terms of "it just works".
Sure, it was cheaper.
Guess what I did when that Hackintosh needed replacing? I walked in and dropped the coin on genuine Apple hardware without a second thought. I have never regretted it, and I'll never go back.
It's not a matter of cheaper for me, but a matter of fitting my needs. I don't want to run AMD graphics cards, I need PCI-E, I want lots of internal storage, I want really high single threaded CPU performance.
I can't buy that from Apple in a desktop form-factor. So I have my Hackintosh.
That being said, I don't disagree that Apple hardware is nice. I have a rMBP 13 and intend on replacing it with a newer model Apple notebook soon.
I did. But now I'm running Yosemite under KVM, VT-d motherboard, dedicated videocard and USB3 hub.
You can get to a point where "it just works".
An example of the sort of hack I'm talking about would be a graphics driver that says it's for the NVidia model E532D. Your graphics card is an E532E. You looked on the internet, and you found out they are exactly identical except for branding, so you dive in the driver and simply flip a bit to make OSX recognize it.
It's unlikely that this company needs all the hardware features of the Mac Pro - probably just the beefy GPUs. That's combined with the power density problems (and higher monthly costs) of this solution, compared to modern rack or blade servers, making it far worse value.
Compare this also to John Siracusa's woes over buying a new Mac: He wants a graphics card powerful enough to game on, and remain useful for a number of years. He wants to be able to get a retina display. He's for now stuck with a 2007 Mac Pro as Apple don't sell a suitable machine.
Not having a screwdriver when you need it in a pinch is penny wise and pound foolish. At best you're now out 30 minutes while you drive to Home Depot, potentially during some sort of catastrophe. At worst maybe you simply cannot do the task that you need to do, because it's 2am and you're in Frankfurt. I've worked in a lot of datacenters that didn't stock basic tools to perform tasks, and frankly it sucked.
I keep a log of all of the purchases I made for the current datacenter build. Non-server / non-structural expenses account for less than $3000, which is less than the cost of a single server. This includes storage bins, carts, shelves, workbenches, chairs, supplies and tools.
A lot of cost estimates have been thrown around (here and elsewhere). The highest that I've seen is $4000 per unit. That is simply absurd. The initial run of prototypes was far less per unit, and this was a small batch made to iron out the kinks. Economies of scale and design tweaks will drive this down even further.
The chassis design is actually quite elegant from a manufacturing standpoint. That's something that I hope will be made evident by follow-up posts that delve into more technical detail.
Licensing costs, and legal costs when you get sued for violating the license?
1. Using the same operating system as the developers of the software, plus access to Apple's fantastic imaging libraries.
2. The Mac Pro, whilst expensive, is good value for money. The dual graphics cards inside it are not cheap at all. As servers with GPUs are fairly niche, this might actually be a cheaper solution.
3. The form factor. Even if you could create PCs that are cheaper with the same spec, they'll use more power, possibly require more cooling (Mac Pro has a great cooling architecture) and will take up a lot more space.
I'd be very interested in hearing how they manage updates and provisioning, however. I can't imagine that'd be much fun on OS X but perhaps there's a way of doing it with OS X Server.
1. Yeah, the OS X graphics pipeline is at the heart of our desire to use Macs in production. It's also pretty sweet to be able to prototype features in Quartz Composer, and use this whole ecosystem of tools that straight up don't exist on Linux.
2. I mentioned this elsewhere already, but it is actually a pretty good value. The chassis itself is not a terrible expense, and it's totally passive. It really boils down to the fact that we want to use OS X, and the Mac Pros are the best value per gflop in Apple's lineup. They're also still a good value when compared against conventional servers with GPUs, although they do have some drawbacks.
3. I would love it if they weren't little cylinders, but they do seem to handle cooling quite well. The power draw related to cooling for this rack versus a rack of conventional servers is about 1/5th to 1/10th as much.
In terms of provisioning, we're currently using OS X Server's NetRestore functionality to deploy the OS. It's on my to-do list to replicate this functionality on Linux, which should be possible. You can supposedly make ISC DHCPd behave like a BSDP server sufficiently to interoperate with the Mac's EFI loader.
We don't generally do software updates in-place, we just reinstall to a new image. However, we have occasionally upgraded OS X versions, which can be done with CLI utilities.
Do you mean your Mac Pros dissipate 1/5th to 1/10th as much heat as other x86 server hardware, or is there some other factor in play that makes your AC 5-10x more power efficient?
Really interesting to hear how you provision servers, had no idea that OS X Server came with tools for that, but it certainly makes sense. I wouldn't have thought Apple would have put much time or thought into creating tools for large deployments, but glad to hear that they have.
Pop into ##osx-server on freenode if you want to talk to the devs.
How the hell did you guys get funding to do this? I can't imagine any sane person wanting to put money behind this. Could I have their contact information?
Compare a Mac Pro to an HP DL360 that can hold 4 8-core Xeons (32 cores total) and over 200GB of RAM along with a few FirePro or Titan GPGPUs, and the HP will give you far greater density (though a rack mount system with 4 8-core Xeons and 4 GTX Titans would be a power and cooling nightmare!). That said, the Mac Pro isn't as far behind as I would have expected.
But OS X also kicks ass at multithreading, especially if you use Apple's graphics libraries. It's entirely possible they get much greater performance from OS X than a Linux or Windows based solution could provide.
Also, if you're trying to sync raw images between OS X clients and the cloud, then you're going to need OS X servers in the cloud.
It'll greatly complicate the clients workflow if they can't use their built in raw converters.
I'd actually qualify this ever-so-slightly by saying "It's good value for money if you need the specific features it offers." Which it evidently does for the OP! But many of us would prefer something with, say, one video card, one mainstream-ish desktop processor, one mechanical hard drive, and a way lower cost.
It's also a bit dear for use as a desktop machine, but it is pretty nice to have one hanging out on your desk for a few weeks.
"Building on OS X technologies means we’re dependent on Apple hardware for this part of the service, but we aren’t necessarily limited to Mac Minis. Apple’s redesigned Mac Pro seemed like an ideal replacement, as long as we could reliably operate it in a datacenter environment."
http://chen.imgix.net/rose.png?w=560
What other upsamplers look like: https://github.com/haasn/mpvhq-upscalers/blob/master/Rose.md
Looking at the other operations available, I fail to see what Quartz does better than plain imagemagick.
The downsampling also isn't that great.
Original image: https://raw.githubusercontent.com/haasn/cms/master/rings_lg_...
Downsampled with imgix: http://chen.imgix.net/rings_lg_orig.png?w=400
Downsampled with imagemagick: https://0x0.st/1-.png http://i.imgur.com/Nvl7tAm.png
Downsampled with imagemagick, gamma correct: https://0x0.st/1i.png http://i.imgur.com/Hrm4COb.png
Note how the luminance becomes square in the center (step back a bit if you can't see it), and also the edge pixels on the imgix version.
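The "gamma correct" distinction above is worth spelling out: sRGB pixel values are non-linearly encoded, so averaging them directly (what a naive downsampler does) darkens high-contrast detail. A toy sketch using an approximate gamma of 2.2 rather than the exact piecewise sRGB curve:

```python
# Averaging sRGB-encoded values directly darkens the result; the
# correct approach is to average in linear light, then re-encode.
# (Approximate 2.2 gamma here, not the exact sRGB transfer curve.)

def srgb_to_linear(v):          # v in 0..255
    return (v / 255.0) ** 2.2

def linear_to_srgb(x):          # x in 0..1
    return round(255.0 * x ** (1.0 / 2.2))

# Downsample one white pixel and one black pixel to a single pixel:
naive = (255 + 0) // 2                                          # 127
correct = linear_to_srgb((srgb_to_linear(255) + srgb_to_linear(0)) / 2)

print(naive, correct)  # 127 186 -- the linear-light average is lighter
```

This is presumably what the "gamma correct" imagemagick render above is doing: converting out of sRGB, resizing in linear light, then converting back.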
Then again, one wonders why not just use FreeImage or something?
Isn't running shaders intensive, since they need to be compiled on the fly and handed off to the GPU driver?
Then again, it's often cheaper to throw silicon at problems than people. If you have in-house expertise in Apple's graphics libraries, that might be cheaper than hiring someone who could write the whole thing to run under a lower-cost Linux solution.
Alternatively, OS X might give you automatic access to patent licenses for some of the more expensive image formats.
Have they ever blogged about why they've gone down this path?
From a pure hardware perspective, I would love to move this part of the service to Linux systems with GPUs. I spent some time evaluating this before we committed to the Mac Pro solution -- built some prototype hardware and did a cost analysis. It just wasn't the right move, because of the engineering cost for us. OS X's graphics pipeline is really strong, and we've built a lot of cool things with it. There is no analog whatsoever on Linux -- we would have to commit a lot of resources to re-build what we already have, and in the best case it would not be a customer-visible change. As a lean startup, we have to be ruthless with the work we do: if it doesn't move the needle for our customers, it's probably not the right thing to do right now.
So instead, I've spent some time (and engaged with partners like Racklive) to get the Mac Pros to be as operationally acceptable as possible. This rack design and the chassis we designed go a long way towards achieving that goal. Airflow is taken care of, and the rack hits my power quota almost exactly (at full load). Cabling and networking and host layout follow our patterns from our conventional server racks. USB and HDMI ports on the front allow me to easily use a crash cart.
The lack of IPMI is my biggest operational headache. We have individual power outlet control and can install the OS over the network, so that's something at least.
The OS itself is also challenging. I'm not a fan of launchd. Finding legitimate information about how to do something on OS X is pretty tough, given that most of the discussions are focused around desktop users (who may be prone to pass on theories of how things work rather than facts). We've gotten it to a point where things work pretty well -- we disable a lot of services, run our apps out of circus, use the same config management system as on Linux, and so forth. We treat the Macs as individual worker units, so they're basically a GPU connected to a network card from the perspective of our stack.
This is the biggest nightmare about working with OS X, to me.
Any forum discussion you find on Macrumors or the Apple forums is hilariously misguided with pathetically bad "theories" on why something isn't working and how to fix it.
"Zap the PRAM!" can be found in any/every thread, and that's a mild example.
One project I worked on needed proprietary software that only ran on OS X: it would take a video, perform waveform analysis on the audio, and output a properly timed closed-captioned master, with the text having been provided separately.
This was of course a small project, and only had a few Mac Minis rack mounted for the task, but I can easily see situations similar where you're tied to the platform for one reason or another.
If you don't have OS X in the cloud, then you're going to have to write your own raw image converter, and that means you can't sync with the OS X client native raw converter, complicating the workflow...
Except they needed to build and maintain that silicon.
I'm sure they've performed some kind of market analysis for this, but there are enough differences between OS X and Linux solutions that, for people who use HPC solutions (a growing market), a cleaner path from OS X to HPC would be very helpful.
It is pretty frustrating. We've joked around about how Apple will probably announce a new Xserve at WWDC next month, now that we've done the work to get the Pros happy in production.
I don't really see them re-entering this space though. Apple already has a LOT of businesses that they are clearly bored with. iPods, the Thunderbolt Display, their mice, and so on. They seem to be unable to get engineering motivation behind "unsexy" products, which I definitely think a new Xserve would classify as.
Plus, just making it rack mountable wouldn't necessarily cover our use case. What if it didn't have GPUs, or couldn't fit the ones we wanted? A lot of server class GPUs can't fit in a 1U enclosure, they need 1.5U or 2U chassis for airflow and heatsinks and whatnot.
Buyers of rackmounts require a totally different kind of service. It's not just about the iron; it's a largely separate operation from the consumer PC business. You don't exactly take your Xserve to the Genius Bar...
There simply isn't enough demand for Xserves to make it worth the investment for Apple. (As far as I remember, many companies that bought the original Xserves phased them out again because Apple couldn't deliver that kind of service.)
One can certainly imagine Pixar or whoever having a data-centre of Macs, but at their scale, where they also write all the software for their rendering pipeline, they can easily make that software cross-platform such that developers can test-render on a Mac, then grid-render on a Linux farm without any friction.
I used to have a tape measure from Marathon that was marked out in U, but I haven't seen it in years. They were a pretty cool company at the time.
I personally felt it was a disgrace to see the Apple logo on Apple's rack-mount servers.
Considering how little rack mounted equipment is replaced versus consumer hardware, I can see why.
Yep, there are lots of other options out there. I considered at least 4 or 5 off-the-shelf ones before committing to designing and building our own.
In Sonnet's case, it is super expensive and no denser than this: http://mk1manufacturing.com/store/cart.php?m=product_detail&...
We're able to achieve twice that density, which puts it right on target with where I wanted to be. 44 of 48 switch ports utilized, almost all CDU outlets utilized, and ~13kW out of 14kW utilized under load.
Neither of these is relevant in a rack-mounted environment running heavily custom-written backend/batch software with no user interface.
I get that the Mac Pro is a beautiful object, but this isn't about the mac. It's about the rack, and none of these photos let me understand it in one shot.
None of these pictures really show how that is accomplished here. In fact many of them seem to be deliberately hiding that specific aspect.
I had originally intended there to be a totally disassembled chassis with an airflow overlay on top, but it turned into a lot of work. All of the chassis were already assembled by the time we took the pictures.
The high-level view is that air is drawn in through the vent on the front right, which has a separate channel that all 4 Pros sit in. They are sealed in place, so the air has to pass into each Pro's air intake to go anywhere. The other side of the chassis is open to the back of the rack and holds each Pro's exhaust vent.
I'll go through the photos we took and see if there's something that would help to illustrate this better.
But usually you keep that to yourself! To me, this reads sorta like: "Well, it was really hard to find someone who knew how to build a replacement bridge across the creek. We were pressed for time, and Bob didn't know anything about bridges, but luckily, he used to be in the Air Force and we have a bunch of venture capital. ... So we bought a helicopter instead. We only cross a few times a year, so for now we're coming out ahead and it works out for us. Plus the pictures are nice..."
It is much more expensive, though a lot less engineering work, than buying some used Teslas on eBay: http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2050601.m5...
or even brand new
The Tesla card does have a significant advantage in terms of double precision math, but that isn't the kind of workload we're doing. If we were to go with GPUs on Linux systems, the NVidia GRID card or AMD FirePro server cards are probably a better fit. Or maybe even NVidia Quadro or GTX, although they don't have the proper fan layout and there would be some tears shed over getting the power sockets cabled.
If you're a one person startup, then you do what you have to do to survive. Eventually you get to the point where free stuff actually costs you more than just paying for it in the first place.
We're also working on a third, which I think will be in the format of an interview with the Mac Pro chassis's designer.
A Mac Pro is 9.9 inches tall and 6.6 inches in diameter. 9.9 / 1.75 = 5.65 and 6.6 / 1.75 = 3.77 https://www.apple.com/mac-pro/specs/
If you look at how the airflow works on that shelf, I think you'll see why I don't have confidence in that solution. The air paths to each system seem to be based on wishful thinking.
We also didn't need to go that dense after considering each host's power draw at full load. I design towards a 208v/3ph/50a circuit on each rack, and 44 Mac Pros at full load (plus a switch) are about 13.5kW in my testing. So we would need to build for 60A circuits, or not completely fill the rack, to make the vertical orientation worthwhile.
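Those numbers roughly check out. A sanity check of the circuit budget and per-host draw (the 80% continuous-load derating here is my assumption, in the style of NEC breaker rules; the 13.5kW and 44-host figures come from the comment above):

```python
import math

# Usable power on a 208V three-phase, 50A circuit with an assumed
# 80% continuous-load derating on the breaker.
volts, amps, derate = 208, 50, 0.80
budget_kw = volts * amps * math.sqrt(3) * derate / 1000
print(f"usable budget: {budget_kw:.1f} kW")    # usable budget: 14.4 kW

# Implied average draw per Mac Pro (ignoring the switch's share)
# from the measured ~13.5 kW across 44 hosts at full load.
per_host_w = 13500 / 44
print(f"~{per_host_w:.0f} W per host")         # ~307 W per host
```

So the rack sits just under the circuit's usable budget at full load, which is why adding more hosts would mean rewiring for 60A circuits.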
The power budget is really what settles it. There's no point in cramming extra units in if you're going to have to rewire for them. Systems engineering!
I realize it's not the Apple Way™ but considering just how bizarre and niche the current trash-can Mac Pro line is, it hardly seems more niche than that.
x0054: "You are fitting triangular shaped computers, wrapped into round cases, into square shaped boxes."
And place them horizontally. And without additional fans!
And surprisingly, if you read skuhn's answers here, it all still makes sense for them financially.
And also surprisingly, Apple says it's OK to use the Mac Pros horizontally:
https://support.apple.com/en-us/HT201379
Fascinating.
Physically, the Mac Pro itself is really densely constructed. Even with some empty space inside our Mac Pro chassis, the solution is effectively 1U per 2 GPUs. That's pretty dense, and it hits our power target for the current site design, so going denser would only lead to stranding space ahead of power (which leads to cost inefficiencies).
But, let's consider some hypothetical configs with list prices that I just looked up. Anyone can do this, and these are not reflective of my costs (you can always do better than list). In reality, I would do a lot more digging on the Linux side, but this is a reasonable config that is analogous in performance and fits into my server ecosystem.
I'm excluding costs that would exist either way: the rack itself, CDUs, top-of-rack switch, cabling, and integration labor are all identical or at least very similar. Density is very similar, so there's no appreciable difference in terms of amortized datacenter overhead.
Mac Pro config (4 systems in a 4U chassis):
- 4x Mac Pro ($4600)
- Intel E5-1650 v2
- 16GB RAM
- 256GB SSD
- 2 x D700
- Our custom chassis
Capex only: $0.70/gflop
Linux config (4 systems in a 4U chassis):
- SuperMicro F627G2-FT+ ($4900)
- 4x Intel E5-2643 v2 - 1 CPU each ($1600)
- 8x 8GB DIMMs - 16GB each ($200)
- 8x 500GB 7200rpm (RAID1) HDD - 500GB RAID1 boot drive ($300)
- 8x AMD FirePro S9050 - dual GPU ($1650)
Capex only: $1.03/gflop
For comparison, I'll give EC2 pricing as well. It's a tad unfair, since we aren't including ongoing maintenance and electricity for the Mac or Linux options -- but 3 years of power is also not nearly equal to the cost of a server. EC2's pricing becomes truly atrocious when you consider network costs -- there is simply no comparison between 95th percentile billing and per-byte billing. EC2:
- g2.2xlarge @ 3 year reserved pricing discount ($7410)
Instance operating cost only: $3.23/gflop
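To make the comparison reproducible, here's the capex-per-gflop arithmetic using the list prices quoted above. The aggregate gflops figures are my own back-of-envelope assumptions (~3,500 SP gflops per D700, ~3,230 per FirePro S9050), not imgix's internal numbers, which is why the Linux result lands near rather than exactly on the quoted $1.03:

```python
def capex_per_gflop(capex_usd, gflops):
    """Dollars of up-front hardware cost per gflop of raw GPU throughput."""
    return capex_usd / gflops

# Mac Pro config: 4 systems at $4,600 list, 8 D700s at ~3,500 gflops each (assumed)
mac = capex_per_gflop(4 * 4600, 8 * 3500)

# Linux config: chassis + 4 CPUs + 8 DIMMs + 8 HDDs + 8 FirePro S9050s, at list
linux_capex = 4900 + 4 * 1600 + 8 * 200 + 8 * 300 + 8 * 1650
linux = capex_per_gflop(linux_capex, 8 * 3230)  # S9050 ~3,230 gflops (assumed)

print(round(mac, 2))    # ~0.66; the custom chassis pushes it toward $0.70
print(round(linux, 2))  # ~1.10, in the ballpark of the $1.03 quoted above
```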
The Linux config for sure offers many more hardware options and greater flexibility -- and it also requires us to rewrite our imaging stack, which is working out pretty well for us and our customers. I firmly believe that we've made a pragmatic and sensible choice for our image rendering platform today. imgix has a number of smart and talented people constantly evaluating and improving our platforms, and I'm confident we will keep making the right decisions in the future (regardless of how nicely the Mac Pro may photograph).
We toyed with open shelf type solutions that would let us mount the systems front-to-back, but as you noted, anything above 2 Mac Pros across won't fit in a 19" rack. We also thought about mounting 23" rails in our standard cabinet, but ultimately settled on this chassis and orientation.
One of our early design ideas: https://www.dropbox.com/s/15u19aivay4hfiu/2014-01-13%2017.14...
And to those questioning "Why would you use such expensive systems when commodity hardware is just as fast at half the price?" I would reply that the Mac Pro isn't all that expensive compared to most rack mount servers. If you're talking about a difference of $2000 per server, even across a full rack you're talking less than $100k depreciated over 5 years.
Though Apple is sorely lacking a datacenter-capable rack mount solution. I've always felt they should just partner with system builders like HP or SuperMicro to build a "supported" OS X (e.g. certified hardware / drivers, management backplane, etc.) configuration for the datacenter market. It's kind of against the Apple way, but if this is a market they remotely care about, channel sales is the way to go.
If they are GPU limited...
A full 4U rack of Mac Pros is 8 AMD Fire GPUs (6GB VRAM each), 256GB main RAM, 48 2.7GHz Xeon cores (using the 12-core option), and 4TB of SSD. 10G Ethernet via Thunderbolt2.
Let's set aside differences in GPU and processor performance; we're just looking at the base stats. All for about $36K USD, not including the rack itself.
An alternative is the SuperMicro 4027GR-TR:
http://www.supermicro.com/products/system/4U/4027/SYS-4027GR...
So, maxed out, you've got 8 Nvidia Tesla K80 cards (dual GPU), 1.5TB RAM, 28 2.6GHz Xeon cores, and a lot of storage (24 hot-swap bays). That's in a 4U rack too.
Call it about $13K USD for the server, and $5K per GPU. Plus a little storage, call it about $56K USD with 10G Ethernet.
The SuperMicro system is designed to be remotely managed. Each GPU has double the VRAM of the AMD Fire ones (12GB vs. 6GB).
I don't know the exact performance figures of the AMD Fire vs. the Kepler GK210, but I'm sure the Fire isn't nearly as good. And you've got twice as many Nvidia chips on top of that.
At some point it's going to get cheaper to re-write the software...
K80: 8,740 gflop/s. 2x FirePro D500: 3,500 gflop/s.
A K80 runs about $4,900 per card, whereas the entire Mac Pro (list price) is $4,000. So it's 2.5x the performance at easily 2x the cost, if not more.
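Running the per-gflop numbers on those two figures (list prices as quoted in this thread; no host server, power, or chassis costs included):

```python
k80_cost, k80_gflops = 4900, 8740        # card only; still needs a host server
macpro_cost, macpro_gflops = 4000, 3500  # entire system, dual D500s

print(round(k80_gflops / macpro_gflops, 1))   # 2.5x the raw throughput
print(round(k80_cost / k80_gflops, 2))        # 0.56 $/gflop, card alone
print(round(macpro_cost / macpro_gflops, 2))  # 1.14 $/gflop, whole system

# The bare card beats the Mac Pro per gflop, but it can't run by itself:
# add a K80-capable host (~$13K for the SuperMicro quoted above, shared
# across 8 cards) and the gap narrows considerably.
```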
You're right that there is a cost advantage to going with commodity server hardware, but I don't think it's as great as most people think in this particular case. It's also far from free for us to do the necessary engineering work, and not just in terms of money. It would basically mean pressing pause on feature development at a crucial time in the company's life, and that just isn't the right move.
The fact they discontinued it shows that it's clearly not a market - customers didn't want it in enough volume to justify the product.
I mean I can think of lots of reasons to stick with Rails/whatever (and that's what I'd do), but I'm surprised it is quite so unheard of. You'd get much better performance. Skipping garbage collection with ARC would be awesome. Coding time is still pretty fast, and it's not as unsafe as C/C++.
Just a crazy idea for anyone about to start a mobile app company. :-)
1) Massive premium for compute
2) They're at the mercy of Apple, a single completely unpredictable vendor.
3) Apple changes its form factors to chase the latest "design" way too frequently
4) Apple hardware sucks to manage en masse
Going on past history, I don't think they will have to worry about the Mac Pro being updated too often.
2) This isn't the '90s, when Apple was at risk of folding and going away.
3) They actually don't, other than phones. Go back through all their pro desktop lines starting with the Power Macs. The previous Mac Pro case lasted quite a long time and came from the Power Mac G5.
Consider Mac Rumor's lifespan of the various Mac Pro models: http://buyersguide.macrumors.com/#Mac_Pro
The previous form factor (silver tower) lived from August 2006 to December 2013. If we see that kind of longevity out of the black cylinder form factor, I'd be thrilled (although preferably with more internal updates). However, there's nothing stopping us from adapting our design to whatever new models come out.
We have current rack designs for Mac Minis and Mac Pros now, and we can add a third if the need arises.
No, but they are seriously unpredictable. Just ask anybody that built expensive workflows around Final Cut only to find out the new version wasn't backwards compatible with project files.
Nope, it's not ridiculously expensive. The GPUs in the Mac Pro are actually an exceptionally good value per gflop (when I last did a comparison a few months ago). GPUs that will work in servers are not cheap -- a comparable AMD FirePro S7000 is $1000, and the Mac Pro has two of them.
There's the cost of having these Mac Pro chassis fabricated, but they're passive hunks of metal with some cabling run. Nothing too expensive there, and economies of scale are on our side.
The Mac Pros are at least 5x more cost effective than Mac Minis (per gflop, total operating cost), and they're substantially more cost effective per gflop than doing something like EC2 G2 instances. My estimate is that moving to Linux servers would save us about 10-15% per gflop, but that could easily be eaten up by the engineering time needed to migrate.
They say "Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance". So they couldn't achieve the "same thing" in the sense of running their software on racked computers, because it won't run on PCs, and if you're thinking about expense you'd have to consider the cost of making the software run equivalently well on PCs.
So that's to say, that if there's an actual use for OS X at this scale, it's far less financially crazy than a lot of things that go on in data centers.
"Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance... Building on OS X technologies means we’re dependent on Apple hardware for this part of the service, but we aren’t necessarily limited to Mac Minis."
The better solution is to have a NetRestore server on the network, and configure the Macs to boot from a particular server with the bless and nvram commands. Then on the server, you control if the image gets served or not based on some external logic (in my case, an attribute in my machine database).
At the moment, NetRestore is running on an OS X Server machine hanging out on the network, but integrating it with our existing Linux netboot environment is on my to-do list.
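For anyone wanting to try the same approach, the client-side half looks roughly like this. The server address is a placeholder and exact flags vary by OS X version, so treat it as a sketch rather than a recipe:

```shell
# Point the Mac's firmware at a specific NetBoot/NetRestore server
# (placeholder IP; bsdp://255.255.255.255 would broadcast for any server).
# Run as root on the client.
sudo bless --netboot --server bsdp://192.168.10.5

# Inspect the firmware variables to confirm the boot selection took:
sudo nvram -p | grep -i boot
```

The server-side gating (serving or withholding the image based on an attribute in a machine database) lives entirely on the NetRestore server, so the clients stay dumb.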
This has me and a colleague wondering what Apple runs in their data centres. Can anybody hazard a guess? Is it Apple hardware with OS X? Is it custom/third-party hardware running *nix? I seem to remember somebody mentioning Azure not too long ago.
I think that things can be quite different between orgs though, with some adopting a more enterprise-y appliance setup (NetApp Filers, InfoBlox, etc.) and others building services more like an Internet company (Linux servers and open source based services).
Server hardware evaluated by how well it comes out in photos.
I think OS X has been the best all-round Desktop OS for many years now, but what does it give you as a server that a linux-based system can't, and that's worth the trouble of custom racks, vendor lock-in and high costs?
In fact, if you're working with OpenGL, OS X can be frustrating since it only ever supports an OpenGL version a few years behind the latest release - IMO one of the platform's biggest drawbacks.
Then again, I've seen some pretty strange errors on server machines doing GPU-heavy work on linux machines with Nvidia cards, and it's probably easier to get support on a standardised SW/HW system such as the Mac Pro...
I can imagine the dynamics of 4 machines scavenging air from a single chamber, with an opening on one end, will result in the machines nearer the warm aisle having to work harder to keep cool...
I also wonder what kind of ducting could be implemented to minimize this effect.
Anyway, a very cool project ending in what looks to be a fantastic end product. I wish I had the chance to work on something like this!
If heat or airflow did become a problem, we could add fans to the chassis (either in the intake tunnel or along the exhaust vent). The ideal solution is probably to also attach a chimney to the rear of the rack, but so far it hasn't been necessary.
I should have added, for anyone reading: you do! imgix is hiring, and if you don't see a job description that appeals to you, just reach out and let's see. I'm writing ops-type job descriptions right now.
The feature set I required is served by both equally, so it comes down to performance/ddos prevention/cost for me mostly. I am unlikely to modify what I have just done since it is working fine, but for the future would love to know if anyone has experience with this.
Vendor lock-in is a bitch.
Keep in mind also that 8 Pros in 7U = 48 in a 44U space. So it's a pretty similar density, but I don't think it is as ideal in terms of airflow. Instead, it's more ideal for working on the systems individually (such as in a colocation environment), but that isn't a particular concern of ours.
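Comparing the two densities directly, using the figures from this thread (the 44U rack height is the standard cabinet size mentioned above):

```python
# Open-shelf approach: 8 Mac Pros per 7U shelf
shelves = 44 // 7        # 6 full shelves fit in a 44U rack
print(shelves * 8)       # 48 systems, occupying 42U

# imgix chassis: 4 systems per 4U, i.e. 1 system per 1U
print(44 * 1)            # 44 systems in 44U
```

So the shelf wins slightly on raw count, but as noted above, the power budget caps a rack at roughly 44 Mac Pros anyway, so the extra density strands capacity.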
I'm really curious about any study/comparison between OS X's graphics frameworks and the other open/closed source solutions available. How is 'output quality' measured? Is it really that great and unique? I hardly think that simple image operations like cropping/blurring/masks implemented in an OS X framework are significantly faster or higher quality than the same algorithms implemented on Linux/Windows. Not to mention that you can boost your computation using CUDA/OpenCL on Linux practically seamlessly. But again, citation needed here.