11-11-2016, 09:17 PM
One problem with early IA would be the easy availability of useful but inappropriate apps. You might install a search engine in your exoself that is popular and easy to use, but unduly influenced by commercial, philosophical, or political biases imposed by the designers of the app. Later in the timeline your exoself would be allowed to develop from your own mental architecture, and would reflect your own personal intellectual growth; but it is possible that the designers of any augmentation technology could have an inappropriate level of influence over the nature of the augmentation.
I suppose that would be a problem throughout OA's history. In the early timeline, augmentation technology could be crude and limited in toposophic diversity, forcing augmented minds into a number of easy-to-achieve pigeonholes, or towards certain philosophical goals; later augmentation technology could be much less limited in scope, but the influence (or ainfluence) of the Transapients would always be present in the architecture of these devices.