This workstream, as it currently stands in our tripartite creative-thought structure of workstream, project, and investor vehicle, is grounded in real-life ethnographic and auto-ethnographic experience, the details of which no longer need to be aired.
What’s relevant is that a relationship between humans and machines is being shaped, along the lines of our Better Biz Me and Crime Hunch AI and similar tech, designed to make humans more professionally and societally significant, not less.
Below, equally, machines support intuitive thinking, high-level domain expertise, and efficient “thinking without thinking”, but in two contexts at once: mental health (sourced in, and therefore internal to, the patient) and spaces of mental distress (sourced in what must therefore be seen as the environment of the victim, and so just as clearly external to them).
Inverse paranoia is a term attributed to W. Clement Stone. It involves believing the world is conspiring in one’s favour, not against one. It is a simple flipping of frame which a Guardian video from some years back first opened my eyes to in a formal way:
In the case of inverse paranoia, we perform the same open-minded act of generosity towards the world’s inherent redundancy of meaning, using a Sherlock Holmes way of thinking to keep all possible connections in the same plane for as long as possible.
I have in fact practised it all my life without realising it. It has damaged my professional reputation on occasion, but, as F. Scott Fitzgerald once pointed out (I paraphrase, without wishing to sound boastful), the test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.
Anyways. Here we have it.
The Motogotchi Inverse Paranoia Platform: something we could easily deliver within the frame and setting of Platform Genesis’s open-code approach to our sort of repurposed AI and similar tech. We use the term “Motogotchi” because it evokes – for us at least – the combining of phone and gently animalistic buddy with the very human yearning to be given the opportunity to show just how capable we can become. And all in one seamless device, experience, and set of hugely supportive outcomes for all entities involved.
We initially propose, therefore, that the following ideas be used for two fundamental contexts and fields:
- supporting mental health & mental distress on the one hand;
- supporting a wider, more pleasurable, and more consumer-based gamification on the other.
Two types of user
- Initial driver of the UI: UI-subject
- Initial responder to the UI: UI-object
Three levels of gamification, all remote
- Human 70% – machine/AI-informing 30%
- Machine/AI 70% – human-supported 30%
- Machine/AI only for mass-produced apps / Human, machine-supported in various degrees, for highly personal communication
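The two user roles and three blend levels above can be sketched in code. This is a purely illustrative Python sketch under our own assumptions: the names (`Role`, `BlendLevel`, `LEVELS`) are hypothetical and do not belong to any existing Motogotchi or Platform Genesis codebase.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical names throughout; nothing here is an existing API.

class Role(Enum):
    UI_SUBJECT = "initial driver of the UI"
    UI_OBJECT = "initial responder to the UI"

@dataclass(frozen=True)
class BlendLevel:
    """One remote gamification level: how the interaction is shared."""
    name: str
    human_share: float    # fraction of the interaction led by the human
    machine_share: float  # fraction led by the machine/AI

    def __post_init__(self):
        # the two shares are complementary; 1.0 means fully one-sided
        assert abs(self.human_share + self.machine_share - 1.0) < 1e-9

LEVELS = [
    BlendLevel("human-led", 0.7, 0.3),
    BlendLevel("machine-led", 0.3, 0.7),
    # third level splits by use case: machine-only for mass-produced
    # apps, with the human share rising for personal communication
    BlendLevel("machine-only (mass-produced apps)", 0.0, 1.0),
]
```

The point of the sketch is only that the blends are complementary fractions of one interaction, so a platform could move a given UI-subject smoothly between levels rather than treating them as separate products.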
The levels divide up as in the following examples (other cases may fall under each level):
1. Hugely sensitive mental wellbeing cases
Good for …
- reducing the cost of CBT and other talking therapies, and so allowing much wider implementation, greater efficiency, and ongoing development through the collection of comparable quantitative and qualitative data over statistically significant periods of time;
- embedding self-learning, and therefore empowering those who to date have often been patronisingly – and highly hierarchically – labelled patients (at worst) or service users (at limiting best);
- giving individuals who struggle with communication and self-confidence the opportunity to trial new ways of behaving, reacting, and relating in a safely vulnerable environment.
More importantly, before such programmes are put into practice, potential users should be brought in as properly integrated (ie paid) volunteers, to encourage systemic commitment to the process on peer-to-peer terms between conceptualisers, implementers, and those who will be the end-users of any such systems.
2. Games environments with concrete focusses such as dating, training, social skills generally, self- and life-coaching, etc.
- Self-awareness leading to self-knowledge
- Self-assessment leading to quantifiable self-improvement
- Quantifiable social and work achievements as a result of all the above
Good for …
- easy-to-define and easy-to-negotiate learning needs.
3. In their more open-ended – perhaps more human – sense
a. The briefly alluded-to Motogotchi example of a phone buddy/tech human:
- Results and impact far more qualitative than in the previous case area
- Relative intangibles (quality of life) over more tangible measures
- Significant improvements over easily measured gains
Good for …
- lonely people of all ages.
b. Developing the ability to create new types of language, and thus, via human and hybrid interactions, advancing the communication types already being achieved by AI and machine-learning programmes.
Good for …
- blurring the lines between machine and human thought, allowing both to grow in dialogue and partnership.
As we might all begin to sense, then, repurposing traditional approaches to AI and similar tech could bring huge advantages to our very spontaneous capacity for intuitive thought.
Instead of driving AI to replicate humanity and substitute for it in the frame of reality and all our shared future-presents, why not demonstrate that humans can not only be enabled to capture such thought and evidence it better, but also become expanded, enhanced, and upskilled in such thinking? That would mean using machines not to frame us out of the right to exercise our essential humanness but actually, positively, to demonstrate that we are not the fixed goalposts AI researchers have always assumed us to be!