Frontier Ideas
Thinking about the frontier is a lonesome thing. Perhaps it should be, so that one can have fresh takes. But it can be psychologically and emotionally taxing, so it is useful to remind oneself that this is to be expected. Lonesomeness is a feature, not a bug.
Working on the frontier (as opposed to thinking about it) is less lonesome. Or it can be. Sometimes, it may even become social.
There are many mechanisms for effectively socialising these frontier ideas. Venture capital is an obvious one that has gotten quite good at getting groups together rapidly to work on the frontier. But the idea must be a fairly well-formulated one—I suspect it helps make the risk feel like less of a gamble than a totally blue-skies idea. More on socialising ideas later.
Blue-skies vs The Frontier
There are differences between blue-skies ideas and frontier ideas. Blue-sky ideas traditionally mean unconstrained, speculative thinking: no immediate practical application, high risk, exploratory. Think basic research, “what if” questions, long-term bets. For example, O’Neill Cylinders and Dyson Spheres; everyone wants them but there is little detail on how to build them. The efforts to answer those questions lack concrete engineering pathways with current technology: we don’t know how to get there from here.
On the other hand, frontier ideas are at the edge of current knowledge and practice. They’re novel and unexplored (or under-explored), but there’s often more structure to them. Rather than ignoring boundaries entirely, they push on existing ones. For example, we can develop some credible-sounding plans towards building 100-person von Braun wheels, which are a step change from existing space stations. Such ideas have concrete engineering pathways, even if challenging ones: we can map a credible route from current capabilities to the goal.
So, frontier ideas can be technically rigorous and grounded in real engineering constraints; they can cite existing research and even present some actual calculations. In theory, they should be publishable in academic venues (like aerospace engineering journals) as a way to socialise them, bringing understudied ideas into fashion and, eventually, fruition.
But they don’t typically find a home there because:
- They’re too synthetic and vision-focused. The pieces pull on too many strands; in the case of a piece on a von Braun wheel, it may range across orbital mechanics, historical analysis, economic considerations, and policy critique. Academic journals are siloed by discipline. Where would one even submit this? Aerospace engineering? Materials science? Space policy? None fits perfectly, and the piece isn’t detailed enough in any one sub-field.
- They’re still too speculative (in an academic sense) about what should be built. Academic papers describe what was studied/tested, not “here’s what we should pursue and why.” The normative framing (“we need to build this”) doesn’t fit the academic style.
- The written explanations of frontier ideas might be too accessible. Academic papers have a specific register, written for specialist peer reviewers rather than broad technical audiences.
- They’re advocating for a research direction whereas academia wants “here’s what we discovered” not “here’s why this entire field should pivot to a new approach to space structures.” That’s seen as advocacy, not research.
- They lack the performative markers of research: hypothesis testing, experimental results, statistical analysis. It’s design work and technical synthesis, which doesn’t fit standard paper templates. More fundamentally, frontier ideas are about design and synthesis, proposing how we should approach building something, rather than the discovery and validation that academic publishing is structured around.
So, even if frontier ideas advance knowledge and could guide real engineering work, they’re structurally incompatible with academic publishing. Journals would demand a narrow technical study (e.g., “Tensile properties of Vectran at various temperatures”) or wait until there are results to report on the thing that’s been engineered or built. These aren’t fundamentally advancing the frontier but rather validating a proposed advancement.
On Socialising Frontier Ideas
For these reasons, frontier ideas are often found in research proposals, which have a very narrow audience: a few reviewers at a funding body and maybe a fistful of readers prior to submission. The research-y nature of these ideas also makes them poorly suited to venture funding, because they require sustained R&D programs with uncertain timelines before there’s anything to commercialise; VCs fund execution risk on validated ideas, not multi-year materials science programs. So, they’re unlikely to get socialised through private investment in most cases. This is less true of software, as the various frontier AI labs have demonstrated.
All said, these ideas must exist in some kind of social place, digital and physical, so that they don’t die prematurely. A true marketplace for such ideas would help refine them alongside their dissemination. This is not a new concept.
I found the recent Stripe Press pop-up to be a good example of operationalising one such marketplace for socialising ideas. Stripe Press are well known for bringing out-of-print books into circulation as well as commissioning new books from emerging writers.
But the germs for a book (or a venture or a research project) live elsewhere. These germs—in some cases, they are “frontier ideas”—are being fleshed out in their Works in Progress magazine; this is their digital marketplace whereas the pop-up provides a physical extension. This ecosystem attempts to create, maybe unwittingly, a gravitational well for like-minded people thinking about (but also wanting to work on) frontier ideas. I found this pretty cool—and those who know me well enough, know that I’m slow to admit I like most things.
The more I reflected on the pop-up, and on my experience drafting an article for WiP compared with writing scientific papers, the more I felt that their ecosystem embodies everything that’s missing from the academic publishing ecosystem. Even if a frontier idea were to unwittingly make it into an academic venue, that would not really bring the idea into a competitive marketplace where it can be socialised alongside other ideas (in most cases1). Such a marketplace might also help create cross-disciplinary thinking.
There are many issues with the academic publishing system. A well-publicised one is the excessive volume of articles published to satisfy Goodhartian metrics2, but there is also the impact this has on the careers of academics, whose new projects come to life through such publications. Most funders look at publication metrics, as do the universities employing researchers; more papers may not translate to more money, but fewer papers guarantee you’re out of the game.
But even when one succeeds at winning a grant, the ideas are not wholly new in concept; they often link back to an idea from an earlier work. So if one has a lot of papers but wishes to change tracks, that is unlikely, or at least not easy, which means researchers with frontier ideas outside their publication track may simply abandon them rather than risk their careers.
Alternatives for Academics to Socialise Frontier Ideas
This suggests an alternative pathway for socialising frontier ideas, especially for academics. I can easily see the scientific components of an idea being socialised via personal blogs and JupyterBooks (frontier AI labs have been doing this for quite some time), but the reporting of findings could be presented in an accessible manner via one of the many emerging WiP-style magazines. If this is true, then as the volume of articles submitted to WiP and other magazines grows, more discipline-specific magazines under their umbrella names will need to emerge.
Of course, if WiP-style magazines proliferate without maintaining editorial standards, they risk replicating the very problems they’re meant to solve. The advantage of editorial review over peer review isn’t inherent to the format; it depends on editors remaining selective and focused on intellectual merit rather than volume. So, whether this model can scale while preserving quality remains an open question. In scientific writing, it is easy to find evidence of steep drops in the quality of writing as the quantity of publications an author generates annually rises.
The editorial review process at WiP does much of what peer review claims to do, clarifying thinking and identifying important technical questions, without the gatekeeping. Meanwhile, the verification function of peer review could be better achieved through scientific blogging (e.g., JupyterBooks, personal blogs, GitHub repos) where people’s data and code are made public. Those who really want to dive deeper can do so. Academic journals are attempting this with supplementary materials and data repositories, but the execution remains clunky compared to native open-source solutions designed for code sharing and reproducibility.
Of course, the problem of satisfying Goodhartian metrics for employers will persist. But I think this can be remediated by deprioritising paper counts and h-indices in favour of engagement metrics with code and blogs, as one example. Google Analytics can track genuine intellectual engagement (time on page, return visits, referral sources), while GitHub metrics (stars, forks, commits, issues) reveal whether work is actually being built upon. Both are more meaningful signals than citation counts, which often reflect networking more than impact.
Of course, these metrics can also be gamed, but they’re harder to fake than citation rings and, more importantly, they measure different things: actual use rather than academic credentialing. For the scientist and for overall progress, this represents a smaller cost than the current system’s self-referential citation metrics, because more time will be spent on getting the work actually done (which would be evident from activity in the software and data repositories) than on crafting documents for specific journals that are longer than they need to be.
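To make this concrete, here is a minimal sketch of what a composite engagement score could look like. The signal names, weights, and damping choice are entirely hypothetical assumptions for illustration, not an existing evaluation scheme:

```python
import math

# Hypothetical composite "engagement score". All weights and field
# names below are illustrative assumptions, not a real scheme.
def engagement_score(metrics: dict) -> float:
    """Combine raw signals with log damping so that one viral
    outlier count doesn't dominate the overall score."""
    weights = {
        "github_stars": 1.0,   # broad interest from other developers
        "github_forks": 2.0,   # people actually building on the work
        "repo_issues": 1.5,    # active use surfaces questions and bugs
        "return_visits": 2.5,  # sustained readership, not drive-by clicks
    }
    return sum(w * math.log1p(metrics.get(k, 0)) for k, w in weights.items())

# A repository with modest but real engagement:
example = {"github_stars": 120, "github_forks": 15,
           "repo_issues": 8, "return_visits": 40}
score = engagement_score(example)
```

The log damping is a design choice in the spirit of the argument above: it rewards breadth of genuine engagement across several signals rather than a single runaway count, which also makes the score somewhat harder to game.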
The transition to alternative systems is starting to be incentivised by new funding sources. For example, the Astera Institute commands a research budget comparable to some national research councils and provides substantial support for residents in their programs. Unlike traditional funders, they actively encourage their residents to find other avenues of dissemination than journals; proposals are also evaluated using criteria that de-emphasise publication metrics and disciplinary silos. In the UK, ARIA (Advanced Research and Invention Agency) has funded individuals and organisations without requiring conventional academic credentials or publication records.
These institutions aren’t uniformly better than traditional funders, but they create competitive pressure and demonstrate that alternative evaluation systems are viable. As researchers successfully funded through these channels produce results, it may gradually shift what other funders consider credible indicators of research quality.
1. It is uncommon in my field to have a Science, Nature, or Cell paper; these are outlier venues that can and do create a stir. And, of course, there are the AI conferences, which are also out of domain but definitely help socialise ideas. ↩
2. I suspect it is one that new magazines (like WiP) would love to have, because an inbound cadence is harder for a new magazine or journal but essential to growth. ↩