I've seen plenty of projects described as open source where the firmware is open and the hardware is closed, but this is the first time I've seen one where the schematics and board layout are open but the firmware is closed. Note that the hardware itself is quite simple; the smart stuff happens in three modules: the GNSS receiver, the precision clock, and the FPGA. To me, the contents of the FPGA source code are the only interesting part of this project. Additionally, you won't be able to meaningfully modify or reuse this project without editing that source code.
The hardware module these FPGA binaries seem to be compiled for is described as an AC7100B; the source given for it is a defunct eBay link [3][4] :-O This project uses two and a half grand's worth of atomic clock, and the heart of it runs on a module that fell off the back of a lorry!?
[1] https://github.com/opencomputeproject/Time-Appliance-Project... [2] https://www.nettimelogic.com/clock-products.php [3] https://github.com/opencomputeproject/Time-Appliance-Project... [4] https://www.ebay.com/itm/XINLINX-A7-FPGA-Development-board-A...
Edit: I realise this comes across as quite negative. This looks like a neat project, one I would actually use, but only if it were meaningfully open source.
However, to modify the HDL, you'd need a $3,000 license for Xilinx Vivado, which is pretty much a nonstarter for amateurs.
[1]: https://store.digilentinc.com/arty-a7-artix-7-fpga-developme...
I think the XC7A100T is supported by the free (and limited) edition of Vivado [1]. Of course, the confusion about what you can and can't do with the HDL is in itself a barrier to open source and hobby work. This is true even when the work is theoretically possible and everyone has the best of intentions. <sigh>
[1] https://www.xilinx.com/products/design-tools/vivado/vivado-m...
where if you wanted a tracker that wasn't a full controller you had to build it yourself. This involved a paid workshop in Seattle to get access to the dev kit and learn their tooling. Hardware was open source, but firmware was closed. If you wanted firmware changes you had to pay Synapse consulting fees for them to do it for you, and the updated firmware would then be made available free of charge to anyone else who paid for the workshop.
The only way to really open source a hardware project of this magnitude is to set open source as a constraint from the start. If you don’t, you inevitably end up with someone licensing some code somewhere from a vendor that can’t be open sourced. It gets integrated deeply enough that the project can’t exist without that piece and it’s too late to rewrite around it so the open source commitment goes out the window.
If I had to guess, it sounds like Valve tried to work around the situation with the workshop where participants entered into some contract as part of the workshop.
And if I had to guess about the Facebook situation, it was probably decided to open source the project after the fact. I’m guessing they wrote the website copy while someone else went off to try to secure open source permission for all components involved and they never reconciled the two efforts, hence the weird dead eBay links and missing files.
However, I think they've used code from NetTimeLogic under a licence [1][2] that precludes distributing the source:
> 3.2 Distribution Rights. LICENSEE may reproduce and distribute the Licensed Materials, solely in binary form that operates in LICENSEE’s system-level hardware products.
This is a perfectly reasonable way of developing a system but it is sadly incompatible with sharing your work as open source.
[1] https://www.nettimelogic.com/licensing.php
Edit [2] IANAL, this is not legal advice etc.
I've had a GPS clock going for several years at this point, and without an atomic clock or really any fanciness (just LinuxPPS and Chrony), I see about +/- 380ns, which is pretty good. NTP to the Internet gives me jitter in the range of about 20ms-70ms, about 5 orders of magnitude worse.
(The version a few iterations ago looked like this: https://github.com/jrockway/beaglebone-gps-clock. But I now have a uBlox multi-constellation GPS, which is much more accurate with my limited view of the sky from my Brooklyn apartment. And I 3D printed the case, so it actually looks presentable instead of like the work of some crazed madman who attacked a plastic case with a hacksaw -- which is exactly how I made the first case. As for the DS3231 RTC that I added... that seems to be stable within about 1.5us, which is pretty impressive. I tuned it a little bit with the trim register, though.)
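For anyone wanting to replicate the LinuxPPS + Chrony arrangement, here's a minimal chrony.conf sketch. The device path, refids, and offset values are illustrative; adjust the SHM source to match your GPS daemon (gpsd shown here):

```
# Coarse time-of-day from gpsd via shared memory (NMEA sentences, ~100 ms)
refclock SHM 0 refid NMEA precision 1e-1 offset 0.2 noselect
# Kernel PPS pulse from the GPS, labelled using the NMEA source (~1 us)
refclock PPS /dev/pps0 lock NMEA refid PPS precision 1e-7
```

The `lock NMEA` option pairs each PPS edge with the coarse source so Chrony knows which second the pulse belongs to.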
My takeaway from this article is that the project seems quite compelling, but a lot of cost is added to support PTP. I see why Facebook needs that, but to sync time precisely throughout my lab, all I need is a coaxial cable with 10MHz on it. I don't mind if my workstation is 20ms off UTC.
https://www.lacrossetechnology.com/products/404-1235ua-ss
https://smile.amazon.com/gp/product/B01CCHXTE2/
In the Zoom Age, ambiently glanceable accurate (for humans) time is super useful, as is the temporal pie chart of time left vs. how much I have to care how much time is left illustrated by analog clock faces.
With the Beaglebone, it's theoretically possible to drive the oscillator from a calibrated clock source (use the PPS pulse to discipline an oscillator). I was too lazy to build such a circuit and observe the effects. Some uBlox timing GPSes let you output a suitable signal (you can adjust the frequency in software), but the adjustability range when I last checked wasn't enough to make it work for the Beaglebone. Once you have the MCU running at a well-defined and stable frequency, I think it's possible to give Linux extremely accurate data, or at least know which clock cycle was the start of a second. I don't actually know how much any of this will help, and it's honestly not within my capability to measure.
For me, the outcome of my GPS clock project is that I never have to set my clock, and it looks really good. Running "chronyc sources" is always pleasurable as well, but I probably wouldn't notice a clock that's 20ms off unless I was comparing it against WWVB or something.
This is one of the reasons for OpenNTPD's existence. They wanted something simple, and a ~50ms deviation was part of the design goals/trade-offs for this.
I built something similar, although I didn't use a CSAC [1] since it was too expensive; I did have a nice OCXO, which is pretty stable. I use a Raspberry Pi to distribute plain old TOD via NTP to my local network, but the 10 MHz output I send to my bench, where I can use it for radio projects. It is actually a better clock source (frequency-stability-wise) than the precision time source in my HP spectrum analyzer, but it has worse phase noise (jitter) because it does get corrected by up to +/- 20 ns periodically.
[1] Chip Scale Atomic Clock - Microsemi -> Microchip https://www.microsemi.com/product-directory/clocks-frequency...
https://github.com/opencomputeproject/Time-Appliance-Project...
They do include FPGA bit streams, but it's disingenuous to claim a fully open source release.
Uhm, no, the graph clearly shows that the offset is varying between 0-2 microseconds, which is 0 - 2,000 nanoseconds.
2,000 nanoseconds is not "practically zero". PTP precision is supposed to be in the 10ns range.
Also, how did they get 40us precision on (what I assume to be) a normal kernel on a normal x86 box? I would have assumed that in a datacenter environment, with traffic levels being random, the jitter introduced in the kernel networking stack alone would be in the hundreds of microseconds at least.
It timestamps packets at the network card - independently of any kernel timing.
Also, this isn't exclusive to Intel: most modern (server) NICs at least support TCXO-based timing, and some built for precision use an OCXO for even better precision.
My experience was GPS-backed network clocks were surprisingly cheap - but PTP-aware network switches were surprisingly expensive.
And though I thought I could cut the cost of a grandmaster clock from $2000 down to $200, I figured there'd be no point if I then have to connect it to a $20,000 network switch.
I wonder how Facebook are connecting their precise new time servers to their regular servers?
There is a big push in the Industrial automation world to get TSN/PTP deployed more widely and perhaps universally. The idea is to eliminate the dichotomy of real time vs non real time Ethernet networks in control systems from the plant network down to the machine level.
This way you can send EtherCAT frames at equidistant intervals to servo drives in the microsecond range while casually streaming SPC data or Netflix over the same wire.
This is also in conjunction with the newer single pair Ethernet tech that the Automotive world is also interested in.
For anyone wondering:
> In addition to the more computer-oriented two and four-pair variants, the 100BASE-T1 and 1000BASE-T1 single-pair Ethernet PHYs are intended for automotive applications[17] or as optional data channels in other interconnect applications.[18] The single pair operates at full duplex and has a maximum reach of 15 m or 49 ft (100BASE-T1, 1000BASE-T1 link segment type A) or up to 40 m or 130 ft (1000BASE-T1 link segment type B) with up to four in-line connectors. Both PHYs require a balanced twisted pair with an impedance of 100 Ω. The cable must be capable of transmitting 600 MHz for 1000BASE-T1 and 66 MHz for 100BASE-T1.
> Similar to PoE, Power over Data Lines (PoDL) can provide up to 50 W to a device.[19]
* https://en.wikipedia.org/wiki/Ethernet_over_twisted_pair#Sin...
> The IEEE 802.3bu-2016[12] amendment introduced single-pair Power over Data Lines (PoDL) for the single-pair Ethernet standards 100BASE-T1 and 1000BASE-T1 intended for automotive and industrial applications.[13] On the two-pair or four-pair standards, power is transmitted only between pairs, so that within each pair there is no voltage present other than that representing the transmitted data. With single-pair Ethernet, power is transmitted in parallel to the data. PoDL defines 10 power classes, ranging from .5 to 50 W (at PD).
* https://en.wikipedia.org/wiki/Power_over_Ethernet#PoDL
100BASE-T1 is IEEE 802.3bw-2015, 1000BASE-T1 is IEEE 802.3bp-2016. 2.5 Gb/s, 5 Gb/s, and 10 Gb/s over a single pair is 802.3ch-2020: the focus of these is in the embedded automotive space.
* https://blog.siemon.com/standards/ieee-p802-3ch-multi-gig-au...
It's possible to use PTP as a transport for NTP to take advantage of PTP-specific hardware timestamping. It could also process the correction field, but it seems the switches typically don't support unicast PTP.
The large asymmetry and banding of NTP in the test with Calnex Sentinel suggests it doesn't support the interleaved mode. NTP with hardware timestamping should normally be much more stable and symmetric, closer to PTP.
Doing a quick search, the documentation for Cisco, Arista, and Juniper all mention unicast PTP, so it may be the feature is becoming more prevalent.
See also "Enterprise Profile for the Precision Time Protocol With Mixed Multicast and Unicast Messages":
* https://datatracker.ietf.org/doc/html/draft-ietf-tictoc-ptp-...
https://github.com/facebook/fboss/blob/master/fboss/agent/hw...
They have O(thousands) of switches (source: https://engineering.fb.com/2016/10/18/data-center-engineerin...) so IIRC they get the per-switch cost down by doing a custom design to reduce BOM. I believe the specs and designs are available under the Open Compute project here (search for "Facebook"): https://www.opencompute.org/wiki/Networking/SpecsAndDesigns
I could have used an in-between time unit, like hundredths of a millisecond, but that would have created embarrassingly long variable names.
What's the use case for microsecond-precise timekeeping, that is of interest to (i.e., benefits) FB?
But by no means does that mean we use special timing -- though now that I say that, I realize I have no clue where our time signal comes from. Clearly it wouldn't be the internet, because our trading servers aren't exactly directly connected there; we must have an internal NTP server o_O
Imagine super high speed, high precision robotics, maybe a synthetic fiber layup machine or an exotic milling machine, that moves at 10m/s. If you have 1us of precision and accuracy then you can send movement commands that are precise down to 10um.
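The arithmetic behind that claim is just position error = velocity × time uncertainty; a quick sketch:

```python
def position_uncertainty(velocity_m_s: float, time_uncertainty_s: float) -> float:
    """Worst-case position error introduced by clock uncertainty
    for a tool head moving at constant velocity."""
    return velocity_m_s * time_uncertainty_s

# A 10 m/s tool head with 1 microsecond of timing uncertainty
# mis-places material by about 10 micrometres:
error_um = position_uncertainty(10.0, 1e-6) * 1e6
print(error_um)  # ~10 micrometres
```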
Invented? These cards have been around. e.g. Meinberg makes a variety of expansion cards which use various time keeping sources such as WWVB, IRIG-B, GNSS, etc.
It's an accurate and stable clock source, which is an important distinction. The Meinberg PCI cards appear not to have a local clock source that's well disciplined (i.e., a temperature-compensated oscillator).
Having an accurate time source that is updated once every second (typically) with a 1pps GPS is grand for most things. But if you need a stable clock as well, then you want a tiny atomic clock onboard as well. This is so you can run a clock source for an entire datacenter, or local section.
FB seems to be claiming to have invented a PCI card that makes a commodity server into a Stratum-1 source. Meinberg sells that too, but in proprietary appliance form.
This prevented someone from faking the time signal.
Buddy of mine owns timestamp.com and we were building a service where you could hash/timestamp emails (we would take over the MX for a domain and become a forwarding service). All of this was long before we had blockchains.
Sadly, it never really took off so we abandoned the idea (what is up there now is currently another half attempt at the idea).
Others have tried, but it isn't really a huge demand despite the apparent need.
I would caution people against using the Mellanox/NVIDIA network interface cards in anything they care about. The driver license is proprietary, and you have to build it for your kernel as a DKMS module in a semi-manual process. Makes future system updates a real bother. I realize this probably doesn't apply so much to FB internally, since they build and maintain their own distro in house.
Stuff that's just a couple of years old won't even build the driver for current debian bullseye. Have to run buster? No thanks.
As with so many other things NVIDIA the root cause of this is their absurd licensing approach to open source software and drivers for their hardware.
Hello,
Where did you get that from? The drivers for NVIDIA networking cards are in the upstream kernel. You have the option of using newer versions from DKMS if the built-in version for your kernel is too old.
You can see the release notes for each version as present on the upstream kernel at https://docs.mellanox.com/display/kernelupstreamv512/Linux+K... for Linux 5.12 for example.
> Makes future system updates a real bother.
Sounds like their GPU drivers too.
It comes from Solarflare, who have a long pedigree of low-latency smart NICs. They used to supply Cloudflare, and also supply something like 50% of fintechs/financial markets.
You can also just use OpenOnload to accelerate your programs -- in this case just doing straight Linux socket programming, which can be accelerated without DPDK. Or just use the open source Linux net driver if necessary.
You could look at the man pages for their very expensive 10GbE server NICs in 2003-2004 and see the @intel.com email addresses of the people who wrote them.
First introduced in 2013, the Seiko Space Link GPS clock has proven to be a global success. It is the world's first clock for home or office use that receives a time signal from a GPS satellite, thus delivering atomic clock precision. It measures time to within one second every 100,000 years. Equally importantly, it can be used anywhere in the world and can be installed in almost any situation, so long as it is around 15 metres or less from the open sky. The technology is beautifully practical. Four times every day, the clock receives a time signal from the GPS satellite network and, when conditions are ideal, adjusts automatically within around 10 seconds of receiving the signal. Between these signal receptions, a high accuracy quartz movement keeps the clock on time.
Wherever you may need it, Seiko Space Link is there, silently marking the passage of time with an elegance and precision that is redefining the world’s perception of what a clock can be. A new standard of excellence has arrived.
https://www.cockroachlabs.com/blog/living-without-atomic-clo...
I can understand why a stock broker or trading firm requires PTP to enable precise date-stamps for auditing/validating trades.
I don't see how having a time granularity on the order of picoseconds is needed for a data center.
It should explain how clock uncertainty relates to throughput of causally-related transactions.
Now, does it require a Facebook account with my real name in order to use? /s
I figure units doing open hardware could be like the old SNL sketch about AT&T -- slogan: "We don't care. We don't have to. We're the phone company." -- which was about the same company that hosted Bell Labs.
These timing cards are low volume products so they have a bad habit of going out of support after a few years and then the kernel level drivers don't get updated and there is hardly any community. Having open hardware that multiple vendors can support makes them easier to get added to the mainline kernel.
Surely that's a lot of effort that anybody in a datacentre with decent connectivity could replicate with a Chrony instance?
100 microseconds is 0.0001 seconds, right?
Seems to be something Chrony can do with its eyes closed:
System time     : 0.000003297 seconds slow of NTP time
Last offset     : +0.000000549 seconds
There are lots of great NTP servers out there hosted by trusted parties such as national time labs. Most people don't need fancy GPS hardware (and the expense of paying someone for roof access to put up your antenna!). If you've got good software and good connectivity you can replicate 99.999% of the same thing.
However, isn't it smarter to try to build an infrastructure/architecture that can deal with somewhat less precise time information (milliseconds instead of nanoseconds)?
> Microchip’s next-generation MAC-SA5X miniaturized rubidium atomic clock produces a stable time and frequency reference that maintains a high degree of synchronization to a reference clock, such as a GNSS-derived signal, despite static g-forces or other factors. Its combination of low monthly drift rate, short-term stability and stability during temperature changes allow the device to maintain precise frequency and timing requirements during extended periods of holdover during GNSS outages or for applications where large rack-mount clocks are not possible.
* https://www.microsemi.com/product-directory/embedded-clocks-...
Article by some folks from the manufacturer giving details on the capabilities (with graphs and such):
* https://www.gpsworld.com/new-miniature-atomic-clock-aids-pos...
Generally: if accurate positioning, navigation, and timing (PNT)—especially timing—is important in your infrastructure, then you need to plan for GNSS outages.
> In the event of the GNSS signal loss, we need to make sure the time drift (aka holdover) of the atomic-backed Time Card stays within 1 microsecond per 24 hours. Here is a graph showing the holdover of the atomic clock (SA.53s) over a 24-hour interval. As you can see, the PPS drift stays within 300 nanoseconds, which is within the atomic clock spec.
* Article.
A lot of the demand for high-precision clocks is for cell phone base stations, where there's no guarantee there'll be someone on hand to make repairs promptly.
* They happen occasionally, of course.
https://www.ofcom.org.uk/spectrum/information/gps-jamming-ex...
https://www.navcen.uscg.gov/?pageName=gpsServiceInterruption...
That paper is in context of power grid equipment, but the GPS attack generalizes.
Folks often think "Oh, +/- 50ns, 20ns RMS, easy to filter...", but that's totally wrong.
The GPS will report -30ns from stable for minutes on end, then slew to +10ns, then -5ns, etc. Any high-precision oscillator (such as for radar) that's being jerked around like that isn't going to be as stable as high performance needs.
Even for just handoff of handsets at 2.2-2.3GHz, having the radio network (aka cell towers) all locked to an oven-controlled oscillator that was aligned-to, but far smoother-than, GPS, made a huge difference.
Now, improvements to GPS/GNSS that track 12 satellites instead of 6, and across multiple constellations, can result in more stable radio-based time. But then you get into urban canyons, and can only see 5 instead of 12, and you're right back into the jumpy situation.
GNSS PPS is more jittery but does not drift. The MAC has 1000x less jitter but drifts. You can cleverly combine them with statistics and magic to get the best of both worlds.
Also I imagine they are using a bog standard Kalman filter[1].
Having the MAC means that the 10 MHz reference signal is going to almost always be 10 MHz and, in any given second, contain 10 million pulses [1].
However, if you're just relying on the GPS's PPS and a standard quartz oscillator, then the 10 MHz reference is going to wobble about.
[1] this isn't really true, but illustrates the point of stability.
Then, you can use that disciplined primary clock to provide actual time for timestamping, etc.
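The "statistics and magic" can be as simple as a slow servo loop: steer the stable-but-drifting oscillator toward the long-term average of the jittery-but-unbiased PPS. A toy sketch of the idea, with a plain PI loop standing in for the Kalman filter (all gains, drift rates, and jitter figures here are made up for illustration):

```python
import random

def simulate_disciplined_clock(seconds=600, drift_ns_per_s=10.0,
                               pps_jitter_ns=50.0, kp=0.1, ki=0.02):
    """Toy PI servo: a steadily drifting local oscillator is steered once
    per second by a jittery (but unbiased) PPS phase measurement."""
    phase = 0.0      # true offset of the local clock vs. GNSS time (ns)
    freq_corr = 0.0  # accumulated frequency correction (ns/s)
    history = []
    for _ in range(seconds):
        phase += drift_ns_per_s + freq_corr        # oscillator drifts
        measured = phase + random.uniform(-pps_jitter_ns, pps_jitter_ns)
        freq_corr -= ki * measured  # integral term learns out the drift
        phase -= kp * measured      # proportional term pulls the phase in
        history.append(phase)
    return history

random.seed(42)
tail = simulate_disciplined_clock()[-60:]
print(max(abs(x) for x in tail))  # settles far below the 6,000 ns raw drift
```

A real GPSDO (or a Kalman filter) also weights each measurement by its estimated noise, but the shape of the solution is the same: the PPS supplies long-term truth, the oscillator supplies short-term smoothness.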
I searched around on Wikipedia for it and couldn't find it. I might recommend some edits when I have the time.
https://github.com/facebookarchive/Flicks
ninjaed :(
I would imagine it's archived because there's nothing else to really be done. It's basically a blog post. The header can apparently be replaced with three lines.
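For context, the unit in that repo is the "flick": 1/705,600,000 of a second, chosen so that common audio sample intervals and video frame intervals are exact whole numbers of flicks. A quick check:

```python
FLICKS_PER_SECOND = 705_600_000  # 1 flick = 1/705,600,000 s

# Common audio and video rates divide a second's worth of flicks exactly,
# so a frame (or sample) is always a whole number of flicks:
for rate_hz in (24, 25, 30, 48, 60, 90, 120, 44_100, 48_000):
    assert FLICKS_PER_SECOND % rate_hz == 0
    print(rate_hz, FLICKS_PER_SECOND // rate_hz)
```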
Doesn't sound scary, but not a lot of research labs and data centers are going to want to build their expansion cards from parts, even given free access to instructions. I understand if there isn't enough of a market for this to make it a viable commercial product, but when you have more money than God anyway, what's the reason not to make a few extra and donate or sell them to the small number of users who can use it?
> Where can I get one?
> […] we are currently working with several suppliers and will have their contact info soon available to allow you to purchase an out-of-the-box ready Time Card.
Growing up in Boulder, THE atomic clock (NIST) was touted as a modern marvel (circa my early-'90s childhood); it's incredible to think that similar performance is now available on a PCIe card! Any relation to DARPA's "making progress" announcement in August 2018 [1]?
Interesting... the infrastructure must therefore have a diameter of at most 30 km, thanks to the speed of light.
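That's just distance = c × Δt; taking the ~100 µs bound quoted elsewhere in the thread:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum

def max_diameter_km(time_budget_s: float) -> float:
    """Farthest apart two nodes can be if a signal at light speed must
    cross between them within the given time budget."""
    return C_M_PER_S * time_budget_s / 1000

print(max_diameter_km(100e-6))  # ~30 km for a 100 microsecond budget
```

(In practice signals travel slower than c in fiber, and switches add latency, so the real radius is smaller.)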
Missed opportunity to call it a “time machine”.
Or, I guess, more prosaically, a “clock”.
There's reasonably priced RTK hardware out there now like SwiftNav. Another option is Ublox modules with open source RTKLIB.
Of course they come out with this now, after I've got a few antennas up on the roof, half a dozen GPS-disciplined OCXOs providing 10 MHz and PPS signals throughout the house, which are being fed into SolarFlare PTP NICs, and sync'd throughout the network using PTP.
Apparently my workstations' offsets (RMS) are currently 17 and 21 nanoseconds, though, so I suppose there's still room for improvement.
Overview: https://hackaday.com/2021/07/25/portable-gps-time-server-pow...
Developer story: https://www.linkedin.com/pulse/iot-maker-tale-stratum-1-time...
Repo: https://github.com/Montecri/GPSTimeServer
A simpler version from the comments: https://enginemonitor.blogspot.com/2020/09/gps-module.html?m...
Why bother getting a nanosecond-level accurate time signal if you're going to feed it to an ESP8266 to distribute over Wi-Fi (or, in other words, reducing your accuracy to ~10 milliseconds or so)?
- Facebook engineers have built and open-sourced an Open Compute Time Appliance, an important component of the modern timing infrastructure.
- To make this possible, we came up with the Time Card — a PCI Express (PCIe) card that can turn almost any commodity server into a time appliance.
- With the help of the OCP community, we established the Open Compute Time Appliance Project [1] and open-sourced every aspect of the Open Time Server.[2]
1. https://www.opencompute.org/projects/time-appliances-project...
2. https://github.com/opencomputeproject/Time-Appliance-Project...