Update source_code/acam/loopback-capture.cpp #1
rdp merged 1 commit into rdp:master from taqattack:patch-1
Conversation
By doing this, you can use the helper function to select which device to record from.
Update source_code/acam/loopback-capture.cpp

Works for me, do you need a release with this?

No, that's fine. I'm gonna try to implement some stuff :)

Hey Roger, correct me if I'm wrong, but you're obtaining an audio buffer and then formatting it into a 16-bit PCM WAVE buffer, which is stored in pBufLocal as a byte[]. I was wondering if it was possible to store/duplicate that buffer in a floating-point array instead. I'm not too sure how to do that, though; just a thought.
|
On Thu, Sep 20, 2012 at 1:54 PM, taqattack notifications@github.com wrote:

I left it this way because I had some problems originally getting players

Well, the issue isn't really floating point. I'm just wondering why pData only ranges from 0 to 255 if it's supposed to be 16-bit.

I did look into it some today, and it appears the default "floating point" output from GetMixFormat has wFormatTag == WAVE_FORMAT_EXTENSIBLE, with the SubFormat being KSDATAFORMAT_SUBTYPE_IEEE_FLOAT.

But, to answer your question, pData is a "byte array" because that's how DirectShow's frames are: an array of bytes that you stuff with some "appropriate" data. In this case, we're stuffing 16-bit audio data into it, so every two bytes form a new sample, if that makes sense.

I did try returning KSDATAFORMAT_SUBTYPE_IEEE_FLOAT directly as the advertised DirectShow stream, which resulted in FFmpeg's "Could not connect pins" (VFW_E_NO_ACCEPTABLE_TYPES was the response message). Even just innocently switching the advertised format immediately causes FFmpeg to reject the stream, or actually, to not be able to build the graph. My guess is that IFilterGraph::ConnectDirect, which is the method that is failing, is somehow not compatible with WAVE_FORMAT_EXTENSIBLE.

What made this really weird is that if I run FFmpeg within Visual Studio (as a debugger), then ConnectDirect miraculously works fine, but running it from the command line, it fails with that error message. I think http://stackoverflow.com/questions/2347562/program-crashes-only-in-release-mode-outside-debugger explains why, but I didn't get as far as testing it with windbg.

I do have one other idea for getting it to work with floating-point output, but I'm not sure if it'll work. Another option would be to fix FFmpeg so that it can avoid using ConnectDirect, but that is a whole different task. HTH.

Sorry Roger, I think I may have given the wrong idea in my last post. And yes, FFmpeg isn't able to receive 32-bit floats from the dshow interface. I just realized IAudioCaptureClient->GetBuffer only outputs to a byte array, which is totally fine. My concern is trying to map out numerical values for each byte of the 16-bit PCM in pData. So, for example, 0x00000000 would map to the most negative value in a float/int variable. But it's only outputting values between 0 and 255.

Yeah, I'm not sure there. My guess would be little-endian, based on http://wiki.multimedia.cx/index.php?title=PCM . FFmpeg also seems to prefer pcm_s16le, which is little-endian, so little-endian is probably right.

Thanks to this discussion, I think I have figured out how to capture "up to 32-bit" audio from the sound card, though; see the 32bit branch. I have no idea how to test this, but it seems like a good idea somehow. If there's no complaint I'll probably merge it in about a week.

32-bit does sound amazing! Too bad FLVs can only support up to 16-bit, 44 kHz for RTMP streaming. On a side note, I was finally able to understand how the pData byte samples work. I'm gonna play around a bit to make the microphone loop through this filter. If you have any pointers, let me know!

If I were you, I would just have the mic as a separate dshow input to FFmpeg, then use amerge to join the two. Sounds easier to me than writing a merging filter. Or were you referring to just having it capture "from mic" instead of stereo mix? Yeah, most things only use 16-bit, but it might be an improvement for somebody somewhere, and hey, it might be useful some time or other :)
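A hedged sketch of the amerge approach suggested above. The device names here are placeholders, not verified for any particular machine; the actual names can be listed with `ffmpeg -list_devices true -f dshow -i dummy`:

```shell
# Capture two separate dshow audio inputs and merge them into one
# stereo track with the amerge filter (downmixed via -ac 2).
ffmpeg -f dshow -i audio="virtual-audio-capturer" \
       -f dshow -i audio="Microphone (Realtek High Definition Audio)" \
       -filter_complex "[0:a][1:a]amerge=inputs=2[aout]" \
       -map "[aout]" -ac 2 mixed.mp3
```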

Yeah, I could do that, but FFmpeg isn't compatible with some of the microphones that don't have a standard dshow interface. It seems like I need to use WASAPI to loop that through the VAC filter.

On a side note: some users seem to be having audio desync issues with virtual-audio-capture, i.e., the audio speeds up and starts to play faster than the video. It seems like it's running a bit faster than other devices. I've tested it out using Virtual Audio Cable and other microphones, which seem to sync up properly with the video over long periods of time. I'm guessing it has to do with how often FillBuffer is called, but I'm not sure.

Are the users with problems using 48000 Hz?

On Sun, Sep 23, 2012 at 9:55 AM, taqattack notifications@github.com wrote:

Also, how are timestamps generated using your capture filter? Do users get the same problem with screen-capture-recorder? Is there a place on the forum I could/should go to try and debug this with the users?

Also cdd6f9e