AI Coding Tools Blur Lines Between Developer and Author as Non-Coders Ship Complex Apps

A developer who 'can't write code' ships a native Android terminal IDE using Claude and Codex, while legal questions mount over who owns AI-generated software

By Zotpaper
Read time: 3 min
Sources: 10 outlets
A self-described non-programmer has released a fully native Android terminal IDE — built entirely by directing AI coding assistants Claude Code and OpenAI's Codex — raising both technical eyebrows and legal questions about authorship, ownership, and what it means to be a software developer in 2026.

An App Built Without Writing Code

Ryo Itabashi, who openly states he cannot write code, announced the release of Shelly v0.1.0 this week: a native Android terminal IDE that bundles bash, Node.js, Python 3, git, curl, ripgrep, jq, tmux, vim, and sqlite3 directly inside a single APK — no Termux required.

The project began with a simpler architecture relying on a WebView terminal and Termux, but repeated breakages prompted Itabashi to direct his AI tools to scrap that approach entirely. The replacement uses a native pseudo-terminal (PTY) created with forkpty via JNI, running in the same process as the shell with no TCP sockets or inter-process communication. To bypass Android's SELinux restrictions on executing files in app data directories, Shelly invokes bundled binaries through /system/bin/linker64.
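None of Shelly's source code is quoted in the coverage, so the following is only a minimal C sketch of how such an in-process PTY might be wired up from a JNI entry point. The package name, class name, and binary path are illustrative assumptions; only forkpty, execv, and the /system/bin/linker64 launch trick come from the reporting itself.

    #include <jni.h>
    #include <pty.h>        // forkpty() lives in Bionic libc on Android 6.0+ (link -lutil on glibc)
    #include <unistd.h>     // execv(), _exit()
    #include <sys/types.h>  // pid_t

    /*
     * Hypothetical JNI entry point: spawn a shell on a pseudo-terminal inside
     * the app's own process and return the PTY master fd to the Java/Kotlin
     * side. Package, class, and file paths are illustrative, not from Shelly.
     */
    JNIEXPORT jint JNICALL
    Java_com_example_shelly_Pty_spawnShell(JNIEnv *env, jobject thiz) {
        (void)env; (void)thiz;

        int master_fd;
        pid_t pid = forkpty(&master_fd, NULL, NULL, NULL);
        if (pid < 0) {
            return -1;  // forkpty failed
        }

        if (pid == 0) {
            /*
             * Child: exec the bundled shell. Executing a file under the app's
             * data directory directly is blocked by SELinux on modern Android,
             * so the ELF is handed to the dynamic linker, which loads and runs
             * the binary passed as its first argument.
             */
            char *const argv[] = {
                "/system/bin/linker64",
                "/data/data/com.example.shelly/files/bin/bash",  // illustrative path
                NULL
            };
            execv(argv[0], argv);
            _exit(127);  // only reached if exec failed
        }

        /* Parent: the terminal emulator UI reads and writes master_fd directly,
         * all within the same process: no sockets, no IPC. */
        return master_fd;
    }

Because the PTY master stays inside the app process, the terminal UI can read and write it directly, which is what would make a no-sockets, no-IPC design of the kind described possible.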

Itabashi claims the result is "the only React Native app in the world with an embedded native terminal emulator running in-process via JNI" — a technically sophisticated claim that, if accurate, represents a meaningful engineering achievement regardless of how the code was produced.

The app supports up to four simultaneous live panes and multi-agent routing across Claude, Gemini, Cerebras, Groq, Perplexity, Codex, and local LLMs via llama.cpp, with features such as per-hunk inline diff accept/reject and cross-pane AI context sharing.

The Ownership Question

Itabashi's project arrives alongside a quietly circulating legal analysis published on Substack and shared on developer forum Lobsters this week, posing a direct question: Who owns the code Claude wrote?

The question is not academic. In most jurisdictions, copyright requires human authorship. The US Copyright Office has consistently declined to register works produced autonomously by AI, though it has allowed registration where human creative choices are sufficiently documented. When an AI system generates thousands of lines of functional code in response to natural-language prompts, the degree to which the directing human qualifies as the author remains legally unsettled.

For individual developers like Itabashi, the practical stakes may feel distant — he is releasing Shelly on GitHub. But for companies shipping commercial software built substantially with AI assistance, questions of ownership, liability for bugs or security flaws, and the enforceability of software licences attached to AI-generated code are becoming increasingly pressing.

Known Limitations

Shelly is not without rough edges. Itabashi acknowledges that Claude Code's first-run OAuth flow requires a credential-transplant workaround, that a port-monitoring feature is blocked by SELinux on Android 10 and later, and that test coverage is thin — a common concern when AI-generated codebases lack the iterative human review that typically builds out test suites.

Nonetheless, the project illustrates a rapidly shifting landscape: the barrier to shipping technically complex software is falling sharply, even as the legal and quality-assurance frameworks surrounding AI-generated code struggle to keep pace.

Analysis

Why This Matters

  • The combination of non-programmer-built production software and unresolved IP ownership creates compounding risk: companies and individuals may be shipping code they do not legally own, with limited ability to audit its correctness.
  • If courts or regulators determine that AI-generated code lacks copyright protection, it could undermine the commercial viability of AI-assisted development tools — or, conversely, leave the output of billions of dollars in AI investment in the public domain by default.
  • The technical achievement here — native PTY via JNI on Android without Termux — demonstrates that AI coding tools are now capable of navigating genuinely complex systems-level constraints, not just boilerplate.

Background

The question of AI and intellectual property has been building since at least 2022, when the US Copyright Office began fielding requests to register AI-generated art and text. Its position — that copyright protects human expression, not machine output — has remained consistent, but has not been tested extensively in court for software specifically.

On the tooling side, AI coding assistants have evolved rapidly. GitHub Copilot launched in 2021 as an autocomplete tool; by 2025, agentic systems like Claude Code and OpenAI's Codex could autonomously plan, write, test, and debug multi-file projects over extended sessions. The Shelly project sits at the frontier of this shift: the human directed strategy and evaluated results, while the AI handled implementation entirely.

Termux, the Android terminal emulator that Shelly's architecture was originally built around, has long been the go-to environment for developers wanting a Linux-like shell on Android. Its limitations — particularly around running Node.js-based tools reliably across updates — have been a persistent frustration in the mobile development community, making a Termux-free alternative genuinely appealing.

Key Perspectives

AI-assisted developers: Tools like Claude Code lower the barrier to entry for complex software projects, democratising development and allowing domain experts who are not programmers to build tools for their own needs. The output, they argue, reflects the human's vision and direction.

Legal scholars and IP attorneys: The authorship question is unresolved and the stakes are growing. Without clear human creative contribution that can be documented, AI-generated code may not qualify for copyright protection in major jurisdictions, leaving commercial software in a legally ambiguous state.

Critics and security researchers: Thin test coverage and AI-generated systems code — particularly code that bypasses OS security mechanisms like SELinux — raise legitimate concerns about undiscovered vulnerabilities. When no human fully understands the implementation, auditing for security flaws becomes significantly harder.

What to Watch

  • US Copyright Office guidance or litigation specifically addressing AI-generated software code, which would clarify ownership for commercial products.
  • Adoption or rejection of Shelly by the developer community as a signal of whether AI-built tools can earn trust without traditional human-authored code review.
  • Android security researchers examining whether the /system/bin/linker64 SELinux bypass technique used in Shelly poses broader risks or inspires copycat approaches in less benign applications.

Sources

Zotpaper

Articles published under the Zotpaper byline are synthesized from multiple source publications by our AI editor and reviewed by our editorial process. Each story combines reporting from credible outlets to give readers a balanced, comprehensive view.