Ideas for startups
Here are some ideas for startups. Some of these ideas might be more suitable for non-profits, international organisations, or governments. They're all big ideas, and might take a full ecosystem of startups and other players to make them happen in any meaningful way. Most have some proofs of concept floating around, but (as of this writing) none have widely used solutions.
Feel free to steal these ideas, and just make them happen. Many of these ideas come from conversations I had with others. Ideas are easy. Implementation is hard.
Distributed hosting
The internet is a distributed network. The world-wide web is distributed as a whole. Some protocols, like email, are too. However, individual websites are not distributed, and some have become so dominant that individual companies or organisations have become very powerful. They are also single points of failure.
A distributed hosting protocol could potentially solve these two problems:
- No single owner: distributed ownership makes websites more resilient against the woes of a single company/organisation/government, acquisitions/mergers/bankruptcies, and misuse of data. Privacy could be improved with the ability to host locally (yourself, or a trusted intermediary). The compute power - and thus the hosting costs - would also be divided. Of course, the lack of clear ownership needs to be replaced by some credible way of decision making and change management.
- No single point of failure: distributing websites over many nodes makes them more resilient to attacks by criminals or oppressive governments. Here the challenge is distributing the information in an eventually consistent way, without compromising authority and authenticity.
Distributed hosting might be relatively easy for static content, but is much harder to solve for applications that involve:
- dynamic content (either fast-changing or personalised)
- centralised processing (such as ML predictions: medical diagnoses, voice assistant models, recommender systems)
- authorisation & trust & permission models
- pre-computed large-scale data flows (search index, retail catalogue, "trending topics")
Distributed-hosted content should also be reliable (high uptime), performant (low latency), and scalable, which are three reasons that hosting is actually moving the other way, towards a small number of high-performance, highly available centralised cloud providers.
Lastly, a big challenge with distributed protocols is the (in)ability to quickly update and adapt. Consensus is needed among implementers and users. Many distributed protocols are slow (or impossible) to change once they have become popular. It might be possible to achieve some of the benefits of distributed hosting and protocols (privacy, security, forkability) in open-source centralised systems.
Think distributed social networks, distributed search, distributed encyclopedias, distributed code hosting, and so on.
Approaches that touch upon this idea: torrent networks / p2p filesystems; "edge servers" at everyone's home or ISP, with standardised apps in containers; and blockchains (distributed consensus, and some distributed computing).
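To make the authenticity challenge for static content a bit more concrete, here is a minimal sketch (in Python, with made-up names) of the content-addressing trick used by p2p filesystems: a page is identified by the hash of its content, so any untrusted node can serve it and any client can verify what it received.

```python
import hashlib

# A toy content-addressed store: keys are SHA-256 digests of the content.
# Any untrusted node can hold the blobs; clients verify what they receive.
class ContentStore:
    def __init__(self):
        self._blobs = {}

    def put(self, content: bytes) -> str:
        """Store content and return its content address (hex digest)."""
        address = hashlib.sha256(content).hexdigest()
        self._blobs[address] = content
        return address

    def get(self, address: str) -> bytes:
        """Fetch content and verify it matches the requested address."""
        content = self._blobs[address]
        if hashlib.sha256(content).hexdigest() != address:
            raise ValueError("content does not match its address")
        return content

store = ContentStore()
addr = store.put(b"<html><body>Hello, distributed web</body></html>")
assert store.get(addr).startswith(b"<html>")
```

Dynamic and personalised content breaks this simple model, which is exactly why the harder cases listed above remain open.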
Distributed network
The internet has a distributed element in it, but the end nodes (the users) are typically connected through a single access point. The backbone of the internet is very reliable, but also relatively centralised and prone to state or ISP intervention (censorship), global outages, and natural disasters. As we're relying on the internet for more and more services, this can cause issues.
A true peer-to-peer mesh network would make citizens less reliant on central infrastructure. Such a network would likely be significantly slower, and not very reliable, so it would be an addition to the existing internet. I like to think back to how I used walkie-talkies to communicate with family while abroad on vacation. The mobile phone network worked as well, but routing a call through your home country and back, all for a hefty fee, seems overkill when you can have a direct radio link. Can we have cheap mesh routers, backed by rechargeable batteries, connected together as a second-tier mesh backbone across cities?
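As a toy illustration of such a second-tier mesh backbone, here is a sketch (Python, hypothetical names) of flooding-based relay: each node forwards a message to its neighbours exactly once, so messages can hop across a city without any central infrastructure. A real network would of course need smarter routing, radio links, and battery-aware scheduling.

```python
# Toy flooding relay for a mesh of radio nodes: every node forwards each
# message to its neighbours once, using a message id to avoid loops.
class MeshNode:
    def __init__(self, name):
        self.name = name
        self.neighbours = []       # nodes within radio range
        self.seen = set()          # message ids already forwarded
        self.inbox = []

    def link(self, other):
        self.neighbours.append(other)
        other.neighbours.append(self)

    def receive(self, msg_id, payload):
        if msg_id in self.seen:    # drop duplicates to stop infinite loops
            return
        self.seen.add(msg_id)
        self.inbox.append(payload)
        for node in self.neighbours:
            node.receive(msg_id, payload)

a, b, c = MeshNode("a"), MeshNode("b"), MeshNode("c")
a.link(b); b.link(c)               # a and c are not in direct range
a.receive("msg-1", "hello over the mesh")
assert "hello over the mesh" in c.inbox
```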
Funding for essential web components
We have started to rely on large websites and software products almost as if they were common infrastructure. A couple of different categories can be distinguished:
- Open-source projects: high impact, distributed development; like Linux, git, Firefox and Chromium, programming languages (Python, C++, Java) and tools, and many others
- Non-profit organisations: high impact organisations leading high impact projects; like the Wikimedia Foundation (Wikipedia), Mozilla (Firefox), and the Apache Software Foundation (Apache HTTP, Hadoop, Spark, OpenOffice)
- Commercial platforms: high impact, general utility tools that are struggling to find a business model, and might be better off as publicly funded non-commercial products, saving the expense of finding a business model. Think Twitter (public message broadcast, simple, limited innovation), GitHub and GitLab (open-source software hosting; git itself is already open-source and distributed; GitLab is partly open-source), or Stack Overflow and Quora (Q&A websites)
This is a large mix of projects, each with their own challenges. The bottom line is that there are many high impact projects that are underfunded, understaffed, reliant on volunteer work, or caught in a mismatch between commercial interests and public importance. Many of these are run or built by a surprisingly small number of people, and are therefore relatively cheap to fund given their importance.
Various funding models exist, but reaching sustainable funding remains a challenge.
- Advertisements are popular across the board, but many users find them irritating (and have ad-blockers installed), or discard them for other reasons.
- Donations sound great, but are rarely enough - even Wikipedia needs weeks of begging with large donation banners to make ends meet, despite the majority of content being written by volunteers.
- Other projects rely on corporate sponsorship - like Mozilla's reliance on a default search box deal with Google. Some companies employ engineers to work on certain open-source projects, often used by and thus benefiting the company. Many open-source projects have no funding at all, and depend on volunteers.
- One might think about micropayments (see below): extremely easy donations, or configurable automatic donations based on usage (usage of the like button, time spent, tweets sent, news articles read). Simplicity is key, and it needs to be integrated, international, and ideally free to use.
- Some kind of small internet tax could also work, such as an ISP contribution per connection, a per-device contribution (like the "TV licenses" in some countries), or funding from regular national taxes or international funds (such as EU funding). The scale of the internet makes this feasible: with just $1/€1 per Western internet user per year - thus roughly $1BN/year - we could already fund 10 organisations the size of the Wikimedia Foundation (expenses $90M/year). Compare this with the TV license currently implemented in the UK: each TV contribution costs about $194/year; with 1BN users that would add up to fund close to 2000 organisations like the Wikimedia Foundation, or many more small ones (see the back-of-the-envelope calculation after this list). What about a voluntary $1/€1 a month Internet Support Subscription, possibly implemented as an opt-out checkbox when subscribing to Netflix/Spotify/yet-another-service? However, an additional headache of central funding models like these is that they need a fair and effective mechanism to decide on fund eligibility and distribution.
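The back-of-the-envelope numbers above, written out (the user counts and the Wikimedia budget are the rough figures used in the text, not precise statistics):

```python
# Rough back-of-the-envelope figures from the text above.
users = 1_000_000_000            # ~1BN (Western) internet users, rough figure
wikimedia_budget = 90_000_000    # ~$90M/year in expenses

internet_tax = 1 * users                   # $1 per user per year
print(internet_tax / wikimedia_budget)     # ~11, i.e. roughly 10 Wikimedia-sized orgs

tv_license = 194 * users                   # UK TV license rate applied to 1BN users
print(tv_license / wikimedia_budget)       # ~2150, i.e. close to 2000 such orgs
```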
Online payments & Micropayments
Online payments are cumbersome. Most websites have long forms with credit card details to fill out. For small payments (donations or small purchases) this is much too slow, and insecure - the credit card system has no adequate permission model, no one-time authorization tokens, no maximum per business, no restricted time window. Some countries have slightly more secure and faster payment options, often with a login (which can be saved in a password manager) plus two-factor confirmation (tap on phone), but for small amounts they are still inconvenient. Some internet companies provide similar checkouts (PayPal, Google Wallet, Amazon Pay). All of them have transaction fees, which climb further when currency conversions are involved, making them unsuitable for micropayments.
It should be super easy to transfer money in a secure manner, across currencies. One click should be enough - maybe with two-factor authentication for larger amounts. I'm open to both centralised approaches (with a middleman) and distributed ones (blockchains). This could be useful for:
- Purchases: from small to big purchases (things, food, one-time services); a secure system that supports small payments could boost small online stores (no worries about leaving credit card details)
- Donations: tiny donations could be an alternative for the advertisement model (the omnipresent funding model), and enable (small) content creators to get paid for individual content pieces, such as news articles, encyclopedia pages, videos, music, online courses, open-source software. Tiny, free, secure (peer2peer?) donations could be the solution to troubled business models (journalism/entertainment/education), and give rise to completely new ones.
- Subscriptions: besides regular subscriptions, one could imagine new peer2peer payment models based on page visits or number of media plays, cutting out the platform fees between producer and consumer
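To make the missing permission model concrete, here is a minimal sketch (Python, all names hypothetical) of a scoped, one-time payment authorization: a token that can only be redeemed once, by one merchant, up to a cap, within a time window - the opposite of handing out a reusable credit card number.

```python
import secrets
import time
from dataclasses import dataclass

# A hypothetical scoped payment authorization: unlike a raw credit card
# number, it can only be charged once, by one merchant, up to a cap,
# within a limited time window.
@dataclass
class PaymentToken:
    token_id: str
    merchant: str        # the only party allowed to redeem it
    max_amount: float    # cap per charge
    expires_at: float    # unix timestamp after which it is void
    used: bool = False

    def charge(self, merchant: str, amount: float) -> bool:
        if self.used or merchant != self.merchant:
            return False
        if amount > self.max_amount or time.time() > self.expires_at:
            return False
        self.used = True     # one-time use
        return True

token = PaymentToken(secrets.token_hex(16), "smallwebshop.example",
                     max_amount=5.00, expires_at=time.time() + 600)
assert token.charge("smallwebshop.example", 2.50)
assert not token.charge("smallwebshop.example", 2.50)   # already spent
```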
Physical distribution protocol / universal stuff moving network
The internet makes it easy to transport information, regardless of the type of information. The applications differ, but the transportation is handled by a common protocol. We don't have a physical equivalent for TCP/IP. There is a patchwork of national postal services, package delivery firms, food delivery companies, specialised transport companies, container shipping, mailboxes, shared lockers or collection points, dropping points, delivery address conventions, personal transportation options, and much more. The growing number of company delivery vans is a sign of trouble. It's a mess.
I want a universal system that can handle any type of delivery: from Amazon (packages) to Uber (people), from food to trash. The shipping container standard is a great start for large-scale shipping, but needs to be extended to your doorstep. To be clear, we can have a layered transportation protocol, and the underlying transportation mechanism can be abstracted away as long as there is agreement on the middle layer (defining the payload, payload attributes, and the routing properties).
- mail (classic letters, sensitive information, magazines/newspapers)
- packages (online purchases) in any size
- groceries (cooled, frozen) + dinner/lunch (warm and speedy)
- parts/inventory (large/corporate)
- content of houses (moving)
- people
- garbage
- continuous fluids, like utilities? (packages with water, gas, sewage)
all across country borders, from any address to any other, with various speeds (from minutes to weeks).
The transportation layer could include some network of self-driving vehicles (cars, drones, driving robots), or an underground tube system (for things up to a certain size, from packages to warm meals to recyclable trash payloads, all optimally routed through a mesh network of tubes), or both. For the payload, some ideas to evaluate may include standardised box/container sizes, standardised delivery lockers in every street, temperature control, expected delivery time and prioritisation (and expiry dates), universal and live tracking plus clear insurance policies, and automatic source and destination verification.
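To sketch what that middle layer might describe - independent of whether a van, drone, or tube network does the actual moving - here is a hypothetical manifest (Python, all fields made up):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical manifest for the "middle layer" of a physical transport
# protocol: it describes the payload and how to route it, but says nothing
# about which underlying mechanism moves it.
@dataclass
class ShipmentManifest:
    payload_type: str                    # "mail", "groceries", "person", "garbage", ...
    size_class: str                      # standardised container size, e.g. "S", "M", "L"
    origin: str                          # any address, in any country
    destination: str
    deadline_hours: float                # requested delivery window
    min_temp_c: Optional[float] = None   # temperature control, if needed
    max_temp_c: Optional[float] = None
    insured_value: float = 0.0
    tracking_id: Optional[str] = None

dinner = ShipmentManifest(payload_type="dinner", size_class="S",
                          origin="restaurant #42", destination="my doorstep",
                          deadline_hours=0.5, min_temp_c=60.0)
```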
If you think about it, it's funny that we have developed so many different systems just to move things around. In some cases, there might be good reasons for specialisation, but any standardisation across different categories might give tremendous improvements in efficiency and reliability, even if we just cover some of the above.
Communication protocol(s) & productivity tools
Email is the standard messaging backbone of the internet. Unfortunately, its usage has far outgrown the protocol. Email has become more of a todo system than a messaging protocol, and everyone can put items on your todo list. The signal-to-noise ratio of the average inbox is alarmingly low, yet at the same time messages are expected to be answered quickly, especially at work. There is no effective way to control incoming email at the protocol level, such as email expiry dates, deadlines, access control, threads and conversation splits, reminders, or automatic processing like extracting action points or scheduling events.
The curious mix of messaging system, todo list, reference archive, and identity verification has made email an addictive Skinner box, but ineffective at any of these applications. Email filters and labels only get you so far.
We need better tools for productivity. We need to efficiently handle:
- messaging (urgent or slow; 1:1 or broadcast or in between)
- todo lists (actionable items)
- reference materials (documents, FYIs, mailing lists)
- calendar (appointments & time management)
For each of these categories, I have multiple applications - and different ones for work and private use - and generally they are incompatible with each other. This is a clear sign that solid protocols and standards are missing. Other signs we're in trouble: emailing yourself; auto-replies; copying calendar items between calendar systems; or searching for a message in three different systems before you (don't) find it.
The fact that so many companies are building tools in this area hints at the difficulty of this problem. An overload of information, often in free-form text, makes structured processing a challenge. Enforceable protocols might add some control to the chaos. Client-side tools might help organise more. Automatic (AI?) agents might pre-process incoming items, and partly automate handling them.
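As one illustration of what an enforceable protocol could add, here is a sketch (Python, hypothetical fields) of a message envelope that carries the metadata email lacks today - a declared intent, an expiry, an optional deadline - so the receiving client can file it as a message, todo, reference, or calendar item automatically.

```python
from dataclasses import dataclass
from typing import Optional
import time

# Hypothetical message envelope: the sender declares intent and lifetime,
# so the receiving client can route it to the right place automatically.
@dataclass
class Envelope:
    sender: str
    intent: str                          # "message", "todo", "reference", "event"
    body: str
    expires_at: Optional[float] = None   # drop silently after this time
    deadline: Optional[float] = None     # for todos: when action is due

    def route(self) -> str:
        """Decide which bucket this lands in on the receiving side."""
        if self.expires_at and time.time() > self.expires_at:
            return "expired"
        return {"todo": "todo list", "reference": "archive",
                "event": "calendar"}.get(self.intent, "inbox")

msg = Envelope(sender="colleague@example.com", intent="todo",
               body="Review the draft", deadline=time.time() + 86400)
print(msg.route())   # -> "todo list"
```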
Subscribe to anything
A way to subscribe to results of, or updates about, anything: news stories, blogs, physical places (building constructions, events inside a building, accidents), events on posters, life events from people, new work from authors and artists, etc. Everything has context, but the links between events are currently not modelled very well, and certainly not across platforms.
A proof of concept might look like an app that uses QR codes (for places) and location information (search the internet for updates?), plus a browser plugin (search for RSS feeds, search for email list signups, and otherwise scrape pages daily). But a smooth experience would integrate more deeply with underlying systems, and have a better understanding of the links between physical locations and online sources.
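Such a proof of concept could start as small as this sketch (Python, hypothetical names): one subscription record per "thing", each with its own way of checking for updates (an RSS feed, a scraped page, a QR-code-linked location), polled on a schedule.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical "subscribe to anything" core: each subscription knows how to
# check its own source (RSS feed, scraped page, QR-linked location, ...).
@dataclass
class Subscription:
    name: str
    check: Callable[[], List[str]]   # returns new update headlines, if any

def poll(subscriptions: List[Subscription]) -> None:
    """Run one polling round and print whatever is new."""
    for sub in subscriptions:
        for update in sub.check():
            print(f"[{sub.name}] {update}")

# Example sources; real ones would fetch an RSS feed or scrape a page daily.
subs = [
    Subscription("building site next door", lambda: ["permit approved"]),
    Subscription("favourite author", lambda: []),
]
poll(subs)
```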
Bonus: more ideas from Paul Graham