I was looking at how people think about GUIs versus the command line for data science and came across this thread: https://news.ycombinator.com/item?id=16845666. In the past I have read that a GUI is basically a reification of the command line. Do we still think this way? What exactly do we need from a GUI for data science?
A lot of innovation happens in big data (data that needs distributed compute, e.g. Spark): the ability to blend data across sources, schemaless / schema-on-the-fly handling, deploying analytical models to production, and so on. Is there a case for similar innovation in small-to-medium data (working with datasets of ~10M rows) blended across data sources, with simple analytical models and such? What percentage of use cases are in the big-data realm vs. small/medium data?
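At the small-to-medium scale, blending across sources is often just an in-memory join followed by a simple aggregation. A minimal sketch in pandas (the column names and data here are hypothetical stand-ins for two different sources):

```python
import pandas as pd

# Hypothetical data standing in for two separate sources
# (e.g. a CSV export and an API dump)
orders = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "amount": [10.0, 25.0, 5.0, 40.0],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["east", "west", "east"],
})

# Blend the two sources with an in-memory join; no cluster needed at this scale
blended = orders.merge(customers, on="customer_id", how="left")

# A simple analysis on the blended data: revenue per region
revenue = blended.groupby("region")["amount"].sum()
print(revenue.to_dict())  # {'east': 50.0, 'west': 30.0}
```

A ~10M-row dataset like this typically fits comfortably in memory on a single machine, which is part of why the question of whether it needs its own tooling innovations is interesting.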
One of the top magnet schools in the USA, TJHSST, wants to replace merit-based admissions via an aptitude test with a lottery system: https://www.fcps.edu/news/superintendent-presents-recommendations-improve-diversity-tjhsst-establishing-merit-lottery