Comments
Do a mixdown in NS2 and import the resulting file into your app.
Yes, but what I’m looking to do is record live playback while looping. I need to receive the audio output in another application that can record as I mix and play live.
Unfortunately that just isn’t possible with NS2 at this time. You would need to capture the audio output into another device in order to do it live.
Thank you
I'm going to add Audiobus support in a near-future update as it's been a popular request.
Question is: can I get away with just output from NS2 to Audiobus, or is there a valid case for input (i.e. Audiobus to NS2) too? Output's pretty easy, but input would need more UI work to select the input source. The only current use for input I can see at the moment is so you can use NS2's sample editor to record incoming audio, which doesn't seem massively useful.
Yay! You are the man!
Input would be great to have, even though it's harder to do. I would like to be able to get sounds from my other stuff into NS2 without having to resort to record/export/import. I also think it would head off newbie questions. It will be more important when audio tracks are ready, so maybe it could wait until then.
Anyway, super cool that you’re doing it, at whatever level it ends up at.
P.S. Link Start/Stop might be good to consider lumping in with that, since AB supports it for global transport control, I think.
Is there much chance that an AudioBus input for sampling would make NS2 less stable? Or would it take an excessive amount of time away from the development of other features? If so, don't bother, and concentrate on making sample editing, arranging and saving the best they can be - we can always sample stuff elsewhere and then chop it up and make instruments with it in NS2. Just one opinion, of course.
Really appreciate your work on this! I was thinking about this flow (let me know if this is overthinking or not doable).
1) In NS2, use the External MIDI Instrument to send MIDI to non-AUv3 apps (e.g. NS2 -> iWavestation)
2) Wavestation output -> AudioBus -> NS2 Audio Input
3) Record in NS2 - the Wavestation audio is printed to an audio track in NS2, for mixing with other tracks
To replicate this without Audio Input on NS2, I think I’d need a 3rd app (Auria) to take the outputs from the Wavestation and NS2. My goal here is to cut out that 3rd app while still being able to use the non-AUv3 apps that I like (I don’t see Korg making the iMS-20 or iWavestation AUv3 any time soon, but I could be wrong). Hopefully that makes some sense.
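For what it's worth, the MIDI leg of step 1 is just standard channel-voice messages, independent of any particular app. A small Python sketch (illustrative only - this is not NS2 or Audiobus code, and the channel, note and velocity values are arbitrary examples) of the bytes an external MIDI track emits:

```python
# Illustrative sketch: a MIDI Note On/Off is a 3-byte packet.
# Status byte = message type (high nibble) | channel (low nibble).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a MIDI Note On message (status 0x90 | channel)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Build a MIDI Note Off message (status 0x80 | channel)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

# Middle C on channel 1 (zero-based channel 0), velocity 100:
msg = note_on(0, 60, 100)  # three bytes: 0x90, 0x3C (middle C), 0x64
```

Whether NS2 or Audiobus carries those bytes, the receiving synth (iWavestation here) sees the same messages.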
Yep, I can definitely see this being useful when audio tracks arrive.
But surely by that time all other modern apps should be AUv3, rather than the tangled mess you suggest?
LOL - yeah...I’m not holding my breath for that to happen. But my tangled mess isn’t that bad - I’ve seen some crazy implementations being discussed too (apeMatrix and AUM and Audiobus and MIDIFire and . . . wait, what key are we in??)
Me too. Makes me wonder if there's time left at the end for any music to come out
I used to spend days making complex AUM, AudioBus, ApeMatrix etc. so-called live setups. I eventually came to the conclusion that I was just trying to be clever, and it really didn't help my sanity. I used to dread any updates to any of the apps in my Jenga-like setup.
Oh, so then you’re planning to make NS2 a “modern” AUv3 app?
Seriously though, input from other hosts (which by definition cannot be AUs) is just as much needed as input from non-host apps.
Boom! Exactly. It is a fun problem to solve, and definitely “scratches an itch” that a lot of us have. You know, figuring out a way to do it, just because.. I just don’t want to have to do MusixPro -> MIDIFire -> apeMatrix -> AudioBus -> blah blah -> AudioShare, then import that into a pad in Slate, just to trigger the Ski Jam loop in sync with the first 8 bars of my little ditty.
Sure, but nonetheless I wouldn't dismiss it out of hand if it would provide me with workarounds, e.g. recording AU MIDI FX:
For example, in AUM, with 'MidiEcho' as an AU MIDI FX routed to the midifire1 port; then in MIDIFire, midifire1 -> midifire3, with Obsidian set to receive from midifire3.
Are you planning to make it both an input (the master out?) and an output (a track input, I guess)?
AB input to NS2 could be useful, but I see it as a matter of convenience, not necessity.
Currently:
MIDI Out to trigger IAA app is needed.
Set up is required in AudioShare (minimal effort to select an app)
AudioShare is integrated into NS2 ‘almost’ seamlessly. Importing sounds is easy.
Future:
AB: will this be implemented fully, to remove the need to set up MIDI Out (or MIDI In inside the app)?
Set up inside of AB will be required. (Select app and destination app)
AB: can’t record audio (unless I’m not up to date on some development?)
AB: Audio In to NS2 would be required. Is this going to be automatic? Will there be set up steps required here? Selecting Slate Audio Inputs and/or (eventually) Audio Tracks In/Out?
Just wondering if this is about saving steps or the ‘convenience’ (which I don’t understand) of not using apps like AudioShare. Audio will need to be recorded, whether in AudioShare or Slate, and there will be some set-up required with either AS or AB. With Audio Tracks and an AB In to record inside NS2 Audio Tracks, there will still be time outside of NS2 to set up AB.
Is the goal a ‘never leave NS2’ model? Seems to me like that is only possible with AUv3 apps.
Since I don’t understand the workflow of others, I am probably missing something. The Live Mixing situation is one example. However, if straight Audio Out is required and a NS2 track is just being played as if playing a .wav file, then I think using .wav files would be better - less to go wrong and less CPU demand for the other ‘Live’ apps. Maybe @JBAikens could elaborate on exactly what app set up he’s using to clarify this. Or maybe some of you gents can offer some insight?
I think input would be really important, especially once audio tracks come along. I have already found myself wanting that input a few times, so count me as another vote for it. I would definitely vote for AB output as well. It seems like one of those features that a lot of people will run into at some point and be left wishing they had.
@Blip Interactive I’m still learning NS2, but as I understand it, without Audiobus or AUM input, there’s really no streamlined way for me to record audio generated by third-party instruments such as Animoog, Synth One, etc. I used to do this all the time with NS1, and I feel kinda dumb for assuming I’d still be able to do the same with NS2.
I really do hope you’ll be able to add input because, as awesome as Obsidian is, I do a ton of sound design in other apps, and being unable to record those sounds into NS2 is kind of a deal-breaker for me. I suspect AUv3 versions will eventually become ubiquitous, but I think we’re still a year or two away on that.
Thanks so much for considering it.
I agree - do minimal work toward Audiobus and focus on the sample editor. You implemented the cleanest AUv3 UI of all the DAWs I have used so far, and that's good enough. This is stable and simple; for goodness' sake, people, this guy spent half a decade making a clean, simple workflow for us. Use it and keep it simple - don't overcomplicate things, it's not needed. And by the way, if you want to record into AudioShare, you can do so with an iConnectAudio4+ using one iPad with NS2 and the other with AudioShare, or anything that records. I'm using this, then I send the file back into NS2 via AirDrop, iPad to iPad, wirelessly. The iConnectAudio4+ isn't cheap, but it is rock-solid stable: if the power gets cut, it reboots when power comes back, and it lets audio and MIDI flow freely if you have a mixer, synths, controllers... I have a BeatStep mapped to drums, a QuNexus mapped to piano keys, and my UltraNova to a second synth. And this is done easily in NS2 - very clean UI!
I am new to music production on iOS, so tell me if I am wrong anywhere here: AUM has recording (from the AUM specs: "Record mixes, individual tracks, or both, straight into the storage space of AudioShare"), so importing audio into the NS2 sampler seems straightforward. The only problem is the necessity of cropping the recorded file so it aligns with the song.
That's correct. It's pretty easy to trim off the start in Nanostudio's audio editor.
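If you'd rather do that trim outside the audio editor, here's a minimal Python sketch using only the standard library `wave` module (the helper names are hypothetical, and it assumes you know the tempo and where the downbeat falls in the recording):

```python
import wave

def trim_leading(src_path: str, dst_path: str, seconds: float) -> None:
    """Copy a WAV file, dropping the first `seconds` of audio.

    Handy for cutting off the dead air before the downbeat so the
    recording lines up with bar 1 when imported into a sampler.
    """
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        skip = min(int(round(seconds * src.getframerate())), src.getnframes())
        src.setpos(skip)                      # seek past the unwanted lead-in
        frames = src.readframes(src.getnframes() - skip)
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)                 # frame count is patched on close
        dst.writeframes(frames)

def seconds_per_bar(bpm: float, beats_per_bar: int = 4) -> float:
    """Length of one bar at the given tempo, e.g. 2.0 s at 120 BPM in 4/4."""
    return beats_per_bar * 60.0 / bpm
```

So if the transport was running for exactly one bar at 120 BPM before the part you want, `trim_leading(src, dst, seconds_per_bar(120))` cuts 2.0 s from the front.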