https://oracle2amar.wordpress.com/2010/07/09/what-are-latche...
"A latch is a type of a lock that can be very quickly acquired and freed."
That brings up a couple more questions:
1. May I infer, then, that the only benefit of partitioning a table (fully located on the same disk) that cannot be achieved by indexes is that queries will spend less time waiting for this kind of lock to be released?
2. May I assume that while a table is only being read and not changed, there's no performance gain from partitioning a table (fully located on the same disk) that cannot be achieved by indexes?
See below for many examples. (I originally tried to split up the read and write benefits, but that usually isn't very meaningful, since speeding up writes often speeds up reads indirectly: by enabling more indices to be maintained, releasing resources faster, reducing index size, or improving statistics.)
> 2. May I assume while a table is only being read and not updated, there's no performance gain from partitioning a table (fully located on the same disk) that can not be achieved by indexes?
No. First, latch contention occurs even on fetch--latches are much lower level than the kinds of logical locks taken on updates, and are generally required to enforce database invariants (in Postgres btree indices they live at the page level). Second, there are many other benefits.
An important one is separate maintenance of planner statistics. When a table has enough entries, Postgres's "most frequent values" list ends up unable to fit all the values with large frequencies (the statistics target can be raised to store up to 10,000 such entries, but eventually even that is not enough if your table is very large). It then has to fall back on its histogram, which only works for columns with a natural sort order. Since Postgres bases many important decisions during query planning on these statistics, you will often see order-of-magnitude improvements in query performance just by improving table statistics. Note that composite indices do not keep separate statistics "per prefix" the way partitioning does (that is, an index on (a,b) does not track statistics of column b "within" column a); they just keep statistics on the individual covered columns or expressions. Not tracking cross-column correlations has a large impact in practice if your columns are not uniformly distributed throughout your table (and you have enough rows).
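As a sketch of what per-partition statistics look like in practice (all table and column names here are made up, and this assumes Postgres 10's declarative partitioning syntax):

```sql
-- Hypothetical multitenant table, one partition per tenant.
CREATE TABLE events (
    tenant_id int  NOT NULL,
    kind      text NOT NULL,
    payload   jsonb
) PARTITION BY LIST (tenant_id);

CREATE TABLE events_t1 PARTITION OF events FOR VALUES IN (1);
CREATE TABLE events_t2 PARTITION OF events FOR VALUES IN (2);

-- Each partition gets its own most-common-values list and histogram:
ANALYZE events_t1;
ANALYZE events_t2;

-- The unpartitioned alternative is to crank up the statistics target
-- (100 is the default; 10000 is the maximum):
ALTER TABLE events ALTER COLUMN kind SET STATISTICS 10000;

-- Inspect what the planner actually sees, per partition:
SELECT tablename, attname, most_common_vals
FROM pg_stats
WHERE tablename IN ('events_t1', 'events_t2') AND attname = 'kind';
```

The point is that each partition's MCV list only has to describe that tenant's value distribution, instead of one global list trying (and failing) to cover every tenant at once.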
Another potential benefit (depending on your partition key) can be reduction in index size. For example, if you partition a table into a different segment for each tenant in your organization, you can use selective indices that don't include the tenant id. Note that you can already do this with partial indices, but Postgres does not use their statistics, which makes them much less useful, and large numbers of them can degrade planner performance (moreover, until recently, it didn't even use them for index-only scans, but I believe this was fixed in 9.6). Reduction in index size allows more indices (or more nodes of the index) to fit into memory (or even sometimes cache, for internal nodes), which can lead to dramatic speedups, especially if you can use an index-only scan. Smaller indices are also much faster to build, maintain, and scan.
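To make the index-size point concrete, here is a sketch of the per-tenant layout described above (hypothetical names; the commented-out statements show what you'd need on a hypothetical unpartitioned `events_flat` table instead):

```sql
-- One partition per tenant.
CREATE TABLE events (
    tenant_id  int NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY LIST (tenant_id);

CREATE TABLE events_t1 PARTITION OF events FOR VALUES IN (1);

-- The per-partition index doesn't need to carry the tenant id at all:
CREATE INDEX ON events_t1 (created_at);

-- The unpartitioned alternatives: either a wider composite index ...
--   CREATE INDEX ON events_flat (tenant_id, created_at);
-- ... or one partial index per tenant, for which the planner keeps no
-- separate statistics:
--   CREATE INDEX ON events_flat (created_at) WHERE tenant_id = 1;
```

Dropping the tenant id from every index entry shrinks each entry, and each partition's index only covers that tenant's rows, so far more of it stays in memory.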
The combined size of the indices in a partitioned table is also slightly smaller (by a factor of around log(n) / log(n/p), where n is the number of tuples and p the number of partitions), but that isn't really significant: for 2^30 (over 1 billion) tuples and 100 partitions, log2(2^30) = 30 versus log2(2^30/100) ≈ 23.4, still just an ~1.3x improvement, and it can easily be overwhelmed by the size of the extra per-partition metadata at smaller tuple counts. For much the same reason, updating 100 100-entry histograms / most-frequent-value sets can be (slightly) faster than updating a single 10,000-entry one.
Another one is that if you are fetching more than one row at a time from a partition, keeping them (more) clustered on disk and in RAM (remember that fetching a page of RAM that's not in cache is actually quite slow!) means substantial savings in I/O and a greatly increased likelihood of page-buffer hits, which can mean order-of-magnitude savings. This is probably the biggest improvement from partitioning, and it applies to queries more complicated than point lookups (such as range queries and bitmap index scans). There are also some index types (notably BRIN) that are most effective when you know your keys tend to increase monotonically over the entire table. In multitenant situations, these become much more useful if you can restrict them to a single partition, and they again save index space.
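For instance, a sketch of the BRIN case (hypothetical names again): with one partition per tenant, each partition's heap is effectively an append-only log for that tenant, so timestamps are roughly monotonic on disk within it even though they interleave across tenants in a single flat table.

```sql
-- Hypothetical per-tenant append-only log.
CREATE TABLE log (
    tenant_id int NOT NULL,
    ts        timestamptz NOT NULL,
    line      text
) PARTITION BY LIST (tenant_id);

CREATE TABLE log_t1 PARTITION OF log FOR VALUES IN (1);

-- BRIN stores only one small summary per range of heap pages, so this
-- index is tiny; it works well here because ts correlates with heap order
-- within the partition:
CREATE INDEX ON log_t1 USING brin (ts);

-- A range query then only touches the page ranges whose summaries
-- overlap the predicate:
SELECT count(*) FROM log_t1
WHERE ts >= '2017-01-10' AND ts < '2017-01-11';
```

On an unpartitioned table with interleaved tenants, the ts min/max of every page range would be wide, and the BRIN index would prune almost nothing.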
There are other benefits to overall database performance (in the same vein as btree latching) from being able to address different tables in parallel, such as vacuuming running separately on different tables (there are also some drawbacks, like operations that perform metadata scans [also VACUUM!] taking longer, though maybe this is improved in Postgres's new scheme--I'm not sure).
One that I don't see people talk about a lot, but that is critically important for our use cases where I work, is that Postgres's implementation of SSI (serializable snapshot isolation), though it tries to use gap and row-level locking, will conservatively upgrade to page-level locks, and sometimes even table-level ones. If other updates are going on in the database at the same time, this can cause false conflicts, leading to many unnecessary aborts and retries. If data are partitioned and updates tend to stay within a partition, the chances of this happening are greatly reduced.
Finally, even though it contradicts your original premise: requiring an index to be used for every operation is asking for trouble. Making scans fast pays off in spades (especially if you have arbitrary custom filters in queries) and can free up your programmers to focus on things other than database performance :P A combination of intelligent partitioning (where appropriate), some understanding of Postgres's internals, and very little else can allow servicing surprisingly large numbers of users with varied query requirements, without creating many indices that aren't directly implied by the data model.
Hopefully, this gives you some sense of why people are excited for this feature :)