Changes from 1 commit
53 commits
8f579f2
Add some debug info in security_barrier_camera_demo.
yangwang201911 Jan 24, 2022
fef9a0a
Update.
yangwang201911 Mar 7, 2022
d101b67
Update the debug msg and add result parser for security_barrier_camera…
yangwang201911 Mar 10, 2022
7d5ff26
Add correctness checker interface for demo.
yangwang201911 Mar 10, 2022
3551ada
Implement parser and correctness checker for security_barrier_camera…
yangwang201911 Mar 14, 2022
3fb15e0
Add correctness checker script and instantiate checker of demo securi…
yangwang201911 Mar 16, 2022
6fb10ef
Exit when the task list is empty and the input source is an image instead of …
yangwang201911 Mar 22, 2022
7eca858
Exit worker thread when the inferences of all frames have been comple…
yangwang201911 Mar 23, 2022
a2f8421
Update parameters of demo security_barrier_camera_demo so that just i…
yangwang201911 Mar 23, 2022
b0845bc
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Mar 23, 2022
cc86f1d
Add the comment of replacing model for the demo security_barrier_cam…
yangwang201911 Mar 23, 2022
37f8371
Input single image for demo security_barrier_camera_demo.
yangwang201911 Mar 23, 2022
237dcf3
Update.
yangwang201911 Mar 23, 2022
d0e831b
Update.
yangwang201911 Mar 23, 2022
6b9ce5a
Update.
yangwang201911 Mar 25, 2022
85327ef
Decouple the raw data saving from run_tests.py.
yangwang201911 Mar 25, 2022
ab9d19c
Update.
yangwang201911 Mar 28, 2022
f1c5c87
Add scope 'correctness' to enable correctness checking.
yangwang201911 Mar 31, 2022
b4d110b
Remove the log save for each demo and update the correctness checker.
yangwang201911 Apr 1, 2022
dbe9f13
Update.
yangwang201911 Apr 1, 2022
065ca07
Update format and remove some redundant code.
yangwang201911 Apr 1, 2022
3755135
Update.
yangwang201911 Apr 1, 2022
603c120
Revert the common thread.
yangwang201911 Apr 2, 2022
2e2ade4
Update.
yangwang201911 Apr 6, 2022
3596cd2
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Apr 6, 2022
a3e464a
Update correctness checker as the common measure for all demos.
yangwang201911 Apr 7, 2022
8631f93
1. Fix the issue that demo lost the inference of the last frame when …
yangwang201911 Apr 8, 2022
91baf7d
Update correctness checker and revert inputting images handler for se…
yangwang201911 Apr 11, 2022
2fe3933
1. Update correctness checker to support multi-model input. …
yangwang201911 Apr 12, 2022
128ec7e
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Apr 12, 2022
0914845
Update exit code when correctness checking failed.
yangwang201911 Apr 13, 2022
796e315
Modify the input dataset path when updating option '-i' for demo.
yangwang201911 Apr 15, 2022
f33983d
Update correctness checker.
yangwang201911 Apr 15, 2022
7c47ce7
Correct the output layer order of the attributes model for the securi…
yangwang201911 Apr 24, 2022
2f066d1
1. Stop reborning if the image frame ID is invalid. 2. Clone image frame…
yangwang201911 Apr 26, 2022
1562b9b
Update correctness checking logic.
yangwang201911 Apr 28, 2022
d2d37e3
1. Fix the bug in the security demo that lost the results of the infe…
yangwang201911 May 5, 2022
65fe33b
1. Throw an exception when parsing raw data fails. 2. Correct the v…
yangwang201911 May 6, 2022
1f44622
Add logic to check if the size of vehicle attributes is correct.
yangwang201911 May 7, 2022
0ff6d8c
Update correctness checking.
yangwang201911 May 9, 2022
3958ea4
Fix the hang issue when inputting images folder.
yangwang201911 May 23, 2022
42171be
Update correctness checking logic to handle the exception.
yangwang201911 May 25, 2022
ba629d0
Update.
yangwang201911 May 25, 2022
6c43933
Fix hang issue when inputting images folder.
yangwang201911 May 26, 2022
c10b3ce
Merge branch 'master' of https://github.com/openvinotoolkit/open_mode…
yangwang201911 Jun 24, 2022
5ccace4
Fix run_tests.py terminating with an exception when a timeout occurs.
yangwang201911 Jun 28, 2022
a5d84dc
Update.
yangwang201911 Aug 15, 2022
77968e6
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Aug 15, 2022
7c57fd2
update.
yangwang201911 Aug 16, 2022
f7d7a1d
Update.
yangwang201911 Sep 13, 2022
bb8e615
Merge branch 'ywang2/fix_run_tests_terminated_with_exception_when_tim…
yangwang201911 Sep 13, 2022
19f9ff1
Update.
yangwang201911 Sep 13, 2022
8739a29
Merge branch 'ywang2/fix_run_tests_terminated_with_exception_when_tim…
yangwang201911 Sep 13, 2022
1. Fix the issue that the demo lost the inference of the last frame when inputting video. 2. Update the correctness checker to allow an insignificant difference for the ROI.

Signed-off-by: Wang, Yang <[email protected]>
yangwang201911 committed Apr 8, 2022
commit 8631f933c0e5ba3aea34bbdfed36a99f92366d4e
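
The second half of this commit relaxes the exact-match comparison of per-ROI results between devices. As a rough, standalone illustration of that rule (not the checker itself), the Python sketch below treats the first two values of a result as confidence scores that may differ by up to 0.01 and the remaining values as pixel coordinates that may differ by up to 5, mirroring the prob_gap and pos_gap thresholds introduced in compare_roi further down. The sample ROIs are made up.

# Illustration only: the same tolerance rule as compare_roi() in correctness_cases.py.
PROB_GAP = 0.01  # allowed difference for the two confidence-like values
POS_GAP = 5      # allowed difference, in pixels, for the box coordinates

def rois_match(source_roi, dest_roi):
    if len(source_roi) != len(dest_roi):
        return False
    for index, (a, b) in enumerate(zip(map(float, source_roi), map(float, dest_roi))):
        gap = PROB_GAP if index <= 1 else POS_GAP
        if abs(a - b) > gap:
            return False
    return True

# Hypothetical ROIs that differ insignificantly and therefore still compare equal.
print(rois_match(['0.91', '0.88', '100', '40', '220', '160'],
                 ['0.905', '0.88', '103', '42', '218', '158']))  # True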
31 changes: 25 additions & 6 deletions demos/security_barrier_camera_demo/cpp/main.cpp
@@ -121,6 +121,8 @@ struct Context {
detectorsInfers.assign(detectorInferRequests);
attributesInfers.assign(attributesInferRequests);
platesInfers.assign(lprInferRequests);
totalInferFrameCounter = 0;
totalFrameCount = 0;
}

struct {
@@ -172,6 +174,11 @@ struct Context {
bool isVideo;
std::atomic<std::vector<ov::InferRequest>::size_type> freeDetectionInfersCount;
std::atomic<uint32_t> frameCounter;

// Record the number of inferred frames and the total number of input frames
std::atomic<uint32_t> totalInferFrameCounter;
std::atomic<uint32_t> totalFrameCount;

InferRequestsContainer detectorsInfers, attributesInfers, platesInfers;
PerformanceMetrics metrics;
};
@@ -292,10 +299,6 @@ ReborningVideoFrame::~ReborningVideoFrame() {
context.videoFramesContext.lastFrameIdsMutexes[sourceID].lock();
const auto frameId = ++context.videoFramesContext.lastframeIds[sourceID];
context.videoFramesContext.lastFrameIdsMutexes[sourceID].unlock();

// Stop reborning when frame ID reached to input queue size
if (!context.isVideo && frameId >= FLAGS_n_iqs)
return;
std::shared_ptr<ReborningVideoFrame> reborn = std::make_shared<ReborningVideoFrame>(context, sourceID, frameId, frame);
worker->push(std::make_shared<Reader>(reborn));
} catch (const std::bad_weak_ptr&) {}
@@ -384,7 +387,7 @@ void Drawer::process() {
if (!context.isVideo) {
try {
// Exit only when inference on all frames has finished.
if (context.frameCounter >= FLAGS_n_iqs * context.readersContext.inputChannels.size())
if (context.totalInferFrameCounter >= FLAGS_ni * context.totalFrameCount)
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
}
catch (const std::bad_weak_ptr&) {}
@@ -569,6 +572,8 @@ void DetectionsProcessor::process() {
tryPush(context.detectionsProcessorsContext.detectionsProcessorsWorker,
std::make_shared<DetectionsProcessor>(sharedVideoFrame, std::move(classifiersAggregator), std::move(vehicleRects), std::move(plateRects)));
}
// Count the frames that have passed inference
context.totalInferFrameCounter++;
}

bool InferTask::isReady() {
@@ -591,10 +596,10 @@ void InferTask::process() {
InferRequestsContainer& detectorsInfers = context.detectorsInfers;
std::reference_wrapper<ov::InferRequest> inferRequest = detectorsInfers.inferRequests.container.back();
detectorsInfers.inferRequests.container.pop_back();

detectorsInfers.inferRequests.mutex.unlock();

context.inferTasksContext.detector.setImage(inferRequest, sharedVideoFrame->frame);

inferRequest.get().set_callback(
std::bind(
[](VideoFrame::Ptr sharedVideoFrame,
@@ -635,6 +640,12 @@ void Reader::process() {
context.readersContext.lastCapturedFrameIds[sourceID]++;
context.readersContext.lastCapturedFrameIdsMutexes[sourceID].unlock();
try {
if (context.totalInferFrameCounter < FLAGS_ni * context.totalFrameCount)
{
// Reborn this invalid frame so that the worker can be stopped on the next pass
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->push(std::make_shared<Reader>(sharedVideoFrame));
return;
}
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
} catch (const std::bad_weak_ptr&) {}
}
@@ -674,6 +685,8 @@ int main(int argc, char* argv[]) {
videoCapturSourcess.push_back(std::make_shared<VideoCaptureSource>(videoCapture, FLAGS_loop_video));
}
}

uint32_t totalFrameCount = 0;
for (const std::string& file : files) {
cv::Mat frame = cv::imread(file, cv::IMREAD_COLOR);
if (frame.empty()) {
@@ -683,8 +696,12 @@
return 1;
}
videoCapturSourcess.push_back(std::make_shared<VideoCaptureSource>(videoCapture, FLAGS_loop_video));
// Get the total frame count from this video
totalFrameCount = static_cast<uint32_t>(videoCapture.get(cv::CAP_PROP_FRAME_COUNT));
} else {
imageSourcess.push_back(std::make_shared<ImageSource>(frame, true));
// Count the total frames from the input images
totalFrameCount++;
}
}
uint32_t channelsNum = 0 == FLAGS_ni ? videoCapturSourcess.size() + imageSourcess.size() : FLAGS_ni;
@@ -802,6 +819,8 @@ int main(int argc, char* argv[]) {
nireq,
isVideo,
nclassifiersireq, nrecognizersireq};
// Initialize the total input frame count
context.totalFrameCount = totalFrameCount;
// Create a worker after a context because the context has only weak_ptr<Worker>, but the worker is going to
// indirectly store ReborningVideoFrames which have a reference to the context. So there won't be a situation
// when the context is destroyed and the worker still lives with its ReborningVideoFrames referring to the
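
The main.cpp changes above swap the old stop condition (frame counter reaching FLAGS_n_iqs times the number of input channels) for one driven by completed inferences: the worker stops only once totalInferFrameCounter reaches FLAGS_ni * totalFrameCount. The Python snippet below is a simplified model of that condition, not the demo code; it assumes, following the channelsNum line above, that FLAGS_ni acts as the number of channels and that each channel replays every input frame, and the numbers in the example are hypothetical.

# Simplified model of the new stop condition used in Drawer::process() and
# Reader::process(); not the demo code itself.
def all_frames_inferred(total_infer_frame_counter, flags_ni, total_frame_count):
    return total_infer_frame_counter >= flags_ni * total_frame_count

# 2 channels x 10 input frames: the worker must not stop before 20 inferences,
# which is exactly the "lost last frame" case the commit message describes.
print(all_frames_inferred(19, 2, 10))  # False - keep running
print(all_frames_inferred(20, 2, 10))  # True  - safe to stop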
66 changes: 56 additions & 10 deletions demos/tests/correctness_cases.py
@@ -29,21 +29,52 @@ def __init__(self, demo):
def __call__(self, output, test_case, device, execution_time=-1):
pass

def compare_roi(self, source_roi, dest_roi):
source = []
dest = []
if len(source_roi) != len(dest_roi):
print(source_roi)
print(dest_roi)
return False
for item in source_roi:
source.append(float(item))

for item in dest_roi:
dest.append(float(item))
flag = True
prob_gap = 0.01
pos_gap = 5
for index in range(len(source)):
if index <= 1:
if abs(source[index] - dest[index]) > prob_gap:
flag = False
print(source)
print(dest)
break
else:
if abs(source[index] - dest[index]) > pos_gap:
flag = False
print(source)
print(dest)
break
return flag

def check_difference(self):
flag = True
devices_list = {"AUTO:GPU,CPU" : ["CPU", "GPU"],
devices_list = {
#"AUTO:GPU,CPU" : ["CPU", "GPU"],
"AUTO:CPU" : ["CPU"],
"AUTO:GPU" : ["GPU"],
"MULTI:GPU,CPU" : ["CPU", "GPU"]}
#"AUTO:GPU" : ["GPU"],
"MULTI:CPU,GPU" : ["CPU", "GPU"]}
err_msg = ''
multi_correctness = {'CPU': True, 'GPU': True}
for device in devices_list:
for target in devices_list[device]:
if device not in self.results or target not in self.results:
flag = False
err_msg += "\tMiss the results of device {} or device {}.\n".format(device, target)
flag = False
err_msg += "\tMiss the results of device {} or device {}.\n".format(device, target)
if device in self.results and target in self.results:
if self.results[device] != self.results[target]:
flag = False
err_msg += "\tInconsistent results between device {} and {} \n".format(device, target)
# Show the detailed inconsistent results
for case in self.results[target]:
@@ -59,10 +90,25 @@ def check_difference(self):
for frame in self.results[target][case][channel]:
if channel not in self.results[device][case] or (channel in self.results[device][case] and frame not in self.results[device][case][channel]):
err_msg += ("\t\t\t[Not Found on {}]Channel {} - Frame {} : {}\n".format(device, channel, frame, self.results[target][case][channel][frame]))
elif self.results[device][case][channel][frame] != self.results[target][case][channel][frame]:
err_msg += ("\t\t\tInconsist result:\n\t\t\t\t[{}] Channel {} - Frame {} : {}\n".format(target, channel, frame, self.results[target][case][channel][frame]))
err_msg += ("\t\t\t\t[{}] Channel {} - Frame {} : {}\n".format(device, channel, frame, self.results[device][case][channel][frame]))
else:
for obj in self.results[target][case][channel][frame]:
if obj not in self.results[device][case][channel][frame]:
flag = False
err_msg += ("\t\t\t[Not Found on {}]Channel {} - Frame {} : {}\n".format(device, channel, frame, self.results[target][case][channel][frame]))
elif not self.compare_roi(self.results[device][case][channel][frame][obj],self.results[target][case][channel][frame][obj]):
if device != 'MULTI:CPU,GPU':
flag = False
else:
multi_correctness[target] = False
err_msg += ("\t\t\tInconsist result:\n\t\t\t\t[{}] Channel {} - Frame {} : {}\n".format(target, channel, frame, self.results[target][case][channel][frame]))
err_msg += ("\t\t\t\t[{}] Channel {} - Frame {} : {}\n".format(device, channel, frame, self.results[device][case][channel][frame]))
err_msg += ('\t\t---------------------------------------------------------\n')
# Check correctness for MULTI device
for device in devices_list:
if 'MULTI:' not in device:
continue
if multi_correctness['CPU'] == False and multi_correctness['GPU'] == False:
flag = False
if not flag:
print("Correctness checking: Failure\n{}".format(err_msg))
return flag
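
The hunk above also changes how a mismatch is scored for the MULTI:CPU,GPU configuration: an inconsistent ROI no longer fails the check directly, it only marks which reference device disagreed, and correctness fails when the MULTI results agree with neither CPU nor GPU. A minimal sketch of that decision, with hypothetical inputs, follows.

# Sketch of the MULTI:CPU,GPU decision in check_difference(): per-frame
# mismatches only mark the disagreeing reference device, and the check as a
# whole fails when MULTI agrees with neither device.
def multi_device_ok(mismatched_targets):
    multi_correctness = {'CPU': True, 'GPU': True}
    for target in mismatched_targets:  # devices whose results differ from MULTI
        multi_correctness[target] = False
    return multi_correctness['CPU'] or multi_correctness['GPU']

print(multi_device_ok(['GPU']))         # True  - still consistent with CPU
print(multi_device_ok(['CPU', 'GPU']))  # False - consistent with neither device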
@@ -137,6 +183,6 @@ def __call__(self, output, test_case, device, execution_time=0):

DEMOS = [
deepcopy(BASE['security_barrier_camera_demo/cpp'])
Contributor

Could you rebase on master, please?

Suggested change
deepcopy(BASE['security_barrier_camera_demo/cpp'])
BASE['security_barrier_camera_demo/cpp']

.update_option({'-r': None, '-n_iqs': '1'})
.update_option({'-r': None, '-ni': '16', '-n_iqs': '1', '-i': '/home/wy/data_for_security_barrier_camera_demo/images_10/output.mp4'})
.add_parser(DemoSecurityBarrierCamera)
]