# Roadmap to V1

Below you'll find the main features/functions I am keen to add to Mimic, but really it's user feedback and feature requests that will guide what happens next.

As a tool for me, these next steps are the ones I want...

***

## ~~0.1 - Done!~~

It's taken a few months but 0.1.0 has been all about getting from proof-of-concept to something genuinely useful in a real show environment. I've used it on a few gigs and have even found myself reprogramming little extra bits in my controllers mid-show.

It's a little feature-thin, and the UI could definitely do with some *tlc*, but the workflow is there and the features that are in **work**.

***

## 0.2

0.2 will be all about finding bugs and ironing out quirks.

***

## 0.3

* Save and Load - Named Save Files for different shows
* Export Macros - set yourself up something specific and drop it into other sessions.
* Additional micro-features

***

## 0.4

* Multi-Action Cues
* Cue Sequence logic... maybe.
* User-customisation for Feedback and device LUTs

***

## 0.5

* OSC Output
* ArtNet Output

***

## ...1.0?

Seems daft to plan too far ahead without knowing what the community values most in a tool like this!

Alpha testers get access to a Discord server to request features directly.


***

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://midi-mimic.gitbook.io/midi-mimic/roadmap-to-v1.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
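As a minimal sketch, the request above can be issued from Python's standard library; the question text here is only illustrative, and the response format is whatever the documentation endpoint returns:

```python
import urllib.parse
import urllib.request

# Current page URL, as given in the instructions above.
BASE_URL = "https://midi-mimic.gitbook.io/midi-mimic/roadmap-to-v1.md"

def ask_docs(question: str) -> str:
    """Query the documentation page with a natural-language question."""
    # urlencode handles spaces and punctuation in the question safely.
    url = f"{BASE_URL}?{urllib.parse.urlencode({'ask': question})}"
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

# Example (requires network access):
# print(ask_docs("Which version adds OSC output?"))
```

The question is sent as an ordinary query parameter, so any HTTP client works the same way; the only requirement is that the question be URL-encoded.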
