We disregard the industry tradition of normalizing data and refactoring code in order to avoid duplication. We instead look to life and culture for creative inspiration.
We embrace the Creative Commons in code: everything we do in public is automatically licensed with attribution and share-alike. We have workflows that coexist with other source licenses and accept the additional friction.
We have been compared to GitHub, which is similarly inclined, but the two are distinguished in one important way. Wiki is a document editor, not a source code manager, and thus knows exactly what edits took place and saves them, as any other editor does for undo and redo.
Wiki can add, delete, move or edit any item on a page. The historical sequence of these item actions, together with the page actions create and fork, is sufficient to recreate any revision along the path to the present page.
We call this history of editor actions the journal. It travels with the other two components of the page: the title and the current list of items, called the story.
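The replay of a journal into a story can be sketched as follows. This is a minimal sketch: the action types mirror the item and page actions named above, but the exact field names and shapes here are simplifying assumptions, not wiki's actual schema.

```python
def replay(journal):
    """Reconstruct a story by replaying journal actions in order.

    Field names ('type', 'id', 'item', 'after', 'order') are
    illustrative assumptions about the action shapes.
    """
    story = []
    for action in journal:
        kind = action["type"]
        if kind == "create":
            # a create action may carry an initial story
            story = list(action.get("item", {}).get("story", []))
        elif kind == "add":
            # insert after the named item, or at the front if none named
            after = action.get("after")
            index = next((i + 1 for i, item in enumerate(story)
                          if item["id"] == after), 0)
            story.insert(index, action["item"])
        elif kind == "edit":
            story = [action["item"] if item["id"] == action["id"] else item
                     for item in story]
        elif kind == "move":
            # 'order' lists every item id in its new sequence
            by_id = {item["id"]: item for item in story}
            story = [by_id[item_id] for item_id in action["order"]]
        elif kind == "delete":
            story = [item for item in story if item["id"] != action["id"]]
    return story
```

Replaying only a prefix of the journal recreates any earlier revision along the path to the present page.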
All items within the story, often paragraphs, but sometimes images or datasets or visualizations, are identified by a probabilistically unique identifier that stays with the item throughout its history of movement and editing.
Wiki favors small sites with original content speaking to a specific audience. Common practice supplements this with reference material forked into a site to make the site whole. When someone claims another's words as their own, they are under some obligation to align these words with their own thought. But such alignment does not count as new content from the identification perspective.
Original story (items): 309,115
Wiki server (sites): 1,139
Average (items / sites): about 271

We find, using numbers from July 2016, that the average number of newly created and identified story items is in the small hundreds per site. This is consistent with using sites as one might use documents on a traditional desktop.
We expect a site to have one owner but an owner to have many sites. An owner authoring in one site is as likely to fork content from their other sites as any belonging to a different author.
Wiki has many affordances to aid browsing from site to site while writing in one's own origin site. For example, a page forked from another site will resolve links back to that site for pages that are not present on the current site.
See Collaborative Link for how this happens.
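This fallback resolution can be sketched as follows. The function, its return format, and the site names in the test are hypothetical illustrations, not wiki's actual API.

```python
def resolve(slug, local_slugs, fork_origin):
    """Resolve an internal link found on a forked page.

    Prefer the current site; when the target page is absent locally,
    fall back to the site the page was forked from.
    """
    if slug in local_slugs:
        return f"local/{slug}"
    return f"{fork_origin}/{slug}"
```

The effect is that a freshly forked page remains fully navigable even before any of its linked pages have been forked along with it.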
Wiki's side-by-side viewing of related pages, including once-identical pages, includes highlighting and scroll alignment of pages whose items and actions are identified as having a common origin.
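Because items keep their identifiers across forks, finding the shared material between two stories reduces to an id intersection. A minimal sketch, assuming each story is a list of items with an "id" field:

```python
def common_items(story_a, story_b):
    """Return the ids of items present in both stories, in story_a's
    order: these are the candidates for highlighting and scroll
    alignment when the two pages are viewed side by side."""
    ids_b = {item["id"] for item in story_b}
    return [item["id"] for item in story_a if item["id"] in ids_b]
```

Items whose ids appear in only one story are, by the same logic, the divergence between the twins.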
Pages forked from one site to another retain the same name and are henceforth referred to as twins. When browsing multiple sites, should a viewed page have twins in that scope, the page is annotated with newer, older and same-aged pages called out in the page headline.
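The newer/older/same annotation can be sketched as a comparison of last-edit dates. The timestamp representation and the dictionary shapes here are assumptions for illustration.

```python
def classify_twins(current_date, twins):
    """Group twin pages as newer, older, or same-aged relative to the
    page being viewed.

    `current_date` is the viewed page's last-edit timestamp; `twins`
    maps a site name to that site's twin's last-edit timestamp.
    """
    result = {"newer": [], "older": [], "same": []}
    for site, date in twins.items():
        if date > current_date:
            result["newer"].append(site)
        elif date < current_date:
            result["older"].append(site)
        else:
            result["same"].append(site)
    return result
```

The headline annotation then simply reports each non-empty group.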
Search looks through all of the pages for sites visited within one browsing tab. Search results report how many pages have become known throughout one browsing session. Foraging strategies differ on how many sites should be present in the search.
See Neighborhoods for emergent scope management.
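A session-scoped search of this kind can be sketched as a scan over the pages known so far. The scope structure below, a mapping from site name to known page titles, is an assumption standing in for whatever index a browsing session accumulates.

```python
def search(scope, term):
    """Search page titles across every site visited in a session.

    `scope` maps a site name to the list of page titles known for it.
    Returns matching (site, title) pairs plus the total count of pages
    known, which the results report alongside the hits.
    """
    term = term.lower()
    hits = [(site, title)
            for site, titles in scope.items()
            for title in titles
            if term in title.lower()]
    pages_known = sum(len(titles) for titles in scope.values())
    return hits, pages_known
```

Each additional site visited widens the scope, which is why foraging strategies, and neighborhood management, matter.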
Protocols exist for exporting whole sites and then importing page by page from export copies. Whole-site copies are useful for backup and search engines, but experience shows that live management of sites while browsing is a better experience than hoarding copies that have not been read.