init - initialize project

Lee Nony
2022-05-06 01:58:53 +08:00
commit 90a5cc7cb6
6772 changed files with 2837787 additions and 0 deletions

@@ -0,0 +1,71 @@
OpenCV iOS Hello {#tutorial_hello}
================
@tableofcontents
@prev_tutorial{tutorial_ios_install}
@next_tutorial{tutorial_image_manipulation}
| | |
| -: | :- |
| Original author | Charu Hans |
| Compatibility | OpenCV >= 3.0 |
Goal
----
In this tutorial we will learn how to:
- Link the OpenCV framework with Xcode
- Write a simple Hello World application using OpenCV and Xcode
Linking OpenCV iOS
------------------
Follow this step by step guide to link OpenCV to iOS.
-# Create a new Xcode project.
-# Now we need to link *opencv2.framework* with Xcode. Select the Project Navigator in the left
hand panel and click on the project name.
-# Under TARGETS, click on Build Phases and expand the Link Binary With Libraries option.
-# Click on Add Other..., go to the directory where *opencv2.framework* is located and click Open.
-# Now you can start writing your application.
![](images/linking_opencv_ios.png)
Hello OpenCV iOS Application
----------------------------
Now we will learn how to write a simple Hello World Application in Xcode using OpenCV.
- Link your project with OpenCV as shown in the previous section.
- Open the file named *NameOfProject-Prefix.pch* (replace NameOfProject with the name of your
project) and add the following lines of code.
@code{.m}
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
@endcode
![](images/header_directive.png)
- Add the following lines of code to the viewDidLoad method in ViewController.m:
@code{.m}
UIAlertView * alert = [[UIAlertView alloc] initWithTitle:@"Hello!"
                                                 message:@"Welcome to OpenCV"
                                                delegate:self
                                       cancelButtonTitle:@"Continue"
                                       otherButtonTitles:nil];
[alert show];
@endcode
![](images/view_did_load.png)
- You are now ready to run the project.
Output
------
![](images/ios_hello_output.png)
Changes for Xcode 5+ and iOS 8+
-------------------------------
With newer Xcode and iOS versions you need to watch out for some specific details:
- The *.m file in your project should be renamed to *.mm.
- You have to manually include AssetsLibrary.framework into your project, which is not done anymore by default.
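Also note that *UIAlertView* is deprecated as of iOS 8. A minimal replacement sketch using
*UIAlertController* (an assumption on our part that the alert is shown from inside a view
controller, e.g. in viewDidLoad) could look like this:
@code{.m}
// UIAlertController replaces the deprecated UIAlertView on iOS 8+
UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"Hello!"
                                                               message:@"Welcome to OpenCV"
                                                        preferredStyle:UIAlertControllerStyleAlert];
[alert addAction:[UIAlertAction actionWithTitle:@"Continue"
                                          style:UIAlertActionStyleDefault
                                        handler:nil]];
[self presentViewController:alert animated:YES completion:nil];
@endcode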

@@ -0,0 +1,128 @@
OpenCV iOS - Image Processing {#tutorial_image_manipulation}
=============================
@tableofcontents
@prev_tutorial{tutorial_hello}
@next_tutorial{tutorial_video_processing}
| | |
| -: | :- |
| Original author | Charu Hans |
| Compatibility | OpenCV >= 3.0 |
Goal
----
In this tutorial we will learn how to do basic image processing using OpenCV in iOS.
Introduction
------------
In *OpenCV* all the image processing operations are usually carried out on the *Mat* structure. In
iOS, however, to render an image on screen it has to be an instance of the *UIImage* class. To
convert an *OpenCV Mat* to a *UIImage* we use the *Core Graphics* framework available in iOS. Below
is the code needed to convert back and forth between *Mat* and *UIImage*.
@code{.m}
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                    cols,                       // Width of bitmap
                                                    rows,                       // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
@endcode
@code{.m}
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    // An 8-bit single-channel context needs a gray colorspace and no alpha;
    // reusing the image's own RGB colorspace here would make CGBitmapContextCreate fail.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                    cols,                       // Width of bitmap
                                                    rows,                       // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Gray colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}
@endcode
With the image in a *cv::Mat* you can run any OpenCV operation on it. For example, to convert it to
grayscale:
@code{.m}
cv::Mat greyMat;
// a Mat coming from cvMatFromUIImage above has 4 channels (RGBA),
// so the matching conversion code is cv::COLOR_RGBA2GRAY
cv::cvtColor(inputMat, greyMat, cv::COLOR_RGBA2GRAY);
@endcode
After the processing we need to convert the result back to a *UIImage*. The code below can handle
both grayscale and color images (the conversion is determined by the number of channels in the
*if* statement).
@code{.m}
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        // 4-channel data: treat the fourth byte as padding (skip last),
        // matching the layout produced by cvMatFromUIImage above
        bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // width
                                        cvMat.rows,                 // height
                                        8,                          // bits per component
                                        8 * cvMat.elemSize(),       // bits per pixel
                                        cvMat.step[0],              // bytesPerRow
                                        colorSpace,                 // colorspace
                                        bitmapInfo,                 // bitmap info
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // decode
                                        false,                      // should interpolate
                                        kCGRenderingIntentDefault   // intent
                                        );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
@endcode
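As a quick end-to-end sketch (our own illustrative example: it assumes the three methods above live
in a view controller that has a UIImageView property named imageView, and the image name is a
hypothetical bundled resource):
@code{.m}
UIImage *input = [UIImage imageNamed:@"example.jpg"]; // hypothetical bundled image
cv::Mat mat = [self cvMatFromUIImage:input];          // UIImage -> 4-channel RGBA cv::Mat
cv::Mat grey;
cv::cvtColor(mat, grey, cv::COLOR_RGBA2GRAY);         // any OpenCV processing goes here
self.imageView.image = [self UIImageFromCVMat:grey];  // 1-channel Mat -> grayscale UIImage
@endcode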
Output
--------
![](images/output.jpg)
Check out a running instance of the code, with more image effects, on
[YouTube](http://www.youtube.com/watch?v=Ko3K_xdhJ1I).
@youtube{Ko3K_xdhJ1I}

@@ -0,0 +1,79 @@
Installation in iOS {#tutorial_ios_install}
===================
@tableofcontents
@next_tutorial{tutorial_hello}
| | |
| -: | :- |
| Original author | Artem Myagkov, Eduard Feicho, Steve Nicholson |
| Compatibility | OpenCV >= 3.0 |
@warning
This tutorial can contain obsolete information.
Required Packages
-----------------
- CMake 2.8.8 or higher
- Xcode 4.2 or higher
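You can quickly verify that both tools are available from Terminal (the exact version output will
vary with your installation):
@code{.bash}
cmake --version
xcodebuild -version
@endcode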
### Getting the Cutting-edge OpenCV from Git Repository
Launch a Git client and clone the OpenCV repository from [GitHub](http://github.com/opencv/opencv).
On macOS this can be done with the following command in Terminal:
@code{.bash}
cd ~/<my_working_directory>
git clone https://github.com/opencv/opencv.git
@endcode
If you want to install OpenCV's extra modules, clone the opencv_contrib repository as well:
@code{.bash}
cd ~/<my_working_directory>
git clone https://github.com/opencv/opencv_contrib.git
@endcode
Building OpenCV from Source, using CMake and Command Line
---------------------------------------------------------
1. Make sure the Xcode command line tools are installed:
@code{.bash}
xcode-select --install
@endcode
2. Build OpenCV framework:
@code{.bash}
cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios
@endcode
3. To install OpenCV's extra modules, append `--contrib opencv_contrib` to the python command above. **Note:** the extra modules are not included in the iOS Pack download at [OpenCV Releases](https://opencv.org/releases/). If you want to use the extra modules (e.g. aruco), you must build OpenCV yourself and include this option:
@code{.bash}
cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib
@endcode
4. To exclude a specific module, append `--without <module_name>`. For example, to exclude the "optflow" module from opencv_contrib:
@code{.bash}
cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib --without optflow
@endcode
5. The build process can take a significant amount of time. Currently (OpenCV 3.4 and 4.1), five separate architectures are built: armv7, armv7s, and arm64 for iOS plus i386 and x86_64 for the iPhone simulator. If you want to specify the architectures to include in the framework, use the `--iphoneos_archs` and/or `--iphonesimulator_archs` options. For example, to only build arm64 for iOS and x86_64 for the simulator:
@code{.bash}
cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib --iphoneos_archs arm64 --iphonesimulator_archs x86_64
@endcode
If everything is fine, the build process will create
`~/<my_working_directory>/ios/opencv2.framework`. You can add this framework to your Xcode projects.
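To double-check which architectures actually ended up in the framework binary, you can inspect it
with the standard `lipo` tool (a quick sanity check, assuming the default output location above):
@code{.bash}
lipo -info ~/<my_working_directory>/ios/opencv2.framework/opencv2
@endcode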
Further Reading
---------------
You can find several OpenCV+iOS tutorials here @ref tutorial_table_of_content_ios.

@@ -0,0 +1,6 @@
OpenCV iOS {#tutorial_table_of_content_ios}
==========
- @subpage tutorial_ios_install
- @subpage tutorial_hello
- @subpage tutorial_image_manipulation
- @subpage tutorial_video_processing

@@ -0,0 +1,215 @@
OpenCV iOS - Video Processing {#tutorial_video_processing}
=============================
@tableofcontents
@prev_tutorial{tutorial_image_manipulation}
| | |
| -: | :- |
| Original author | Eduard Feicho |
| Compatibility | OpenCV >= 3.0 |
This tutorial explains how to process video frames using the iPhone's camera and OpenCV.
Prerequisites:
--------------
- Xcode 4.3 or higher
- Basic knowledge of iOS programming (Objective-C, Interface Builder)
Including OpenCV library in your iOS project
--------------------------------------------
The OpenCV library comes as a so-called framework, which you can directly drag and drop into your
Xcode project. Download the latest binary from
<http://sourceforge.net/projects/opencvlibrary/files/opencv-ios/>. Alternatively, follow this
guide @ref tutorial_ios_install to compile the framework manually. Once you have the framework,
just drag and drop it into Xcode:
![](images/xcode_hello_ios_framework_drag_and_drop.png)
You also have to locate the prefix header that is used for all header files in the project. The file
is typically located at "ProjectName/Supporting Files/ProjectName-Prefix.pch". There, you have to
add an include statement to import the OpenCV library. Make sure you include OpenCV before you
include UIKit and Foundation, because otherwise you will get weird compile errors saying that
macros like min and max are defined multiple times. For example, the prefix header could look like
the following:
@code{.objc}
//
// Prefix header for all source files of the 'VideoFilters' target in the 'VideoFilters' project
//
#import <Availability.h>
#ifndef __IPHONE_4_0
#warning "This project uses features only available in iOS SDK 4.0 and later."
#endif
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
@endcode
### Example video frame processing project
#### User Interface
First, we create a simple iOS project, for example a Single View Application. Then, we create and
add a UIImageView and a UIButton to start the camera and display the video frames. The storyboard
could look like this:
![](images/xcode_hello_ios_viewcontroller_layout.png)
Make sure to add and connect the IBOutlets and IBActions to the corresponding ViewController:
@code{.objc}
@interface ViewController : UIViewController
{
IBOutlet UIImageView* imageView;
IBOutlet UIButton* button;
}
- (IBAction)actionStart:(id)sender;
@end
@endcode
#### Adding the Camera
We add a camera controller to the view controller and initialize it when the view has loaded:
@code{.objc}
#import <opencv2/videoio/cap_ios.h>
using namespace cv;
@interface ViewController : UIViewController
{
...
CvVideoCamera* videoCamera;
}
...
@property (nonatomic, retain) CvVideoCamera* videoCamera;
@end
@endcode
@code{.objc}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
self.videoCamera.defaultFPS = 30;
self.videoCamera.grayscale = NO;
}
@endcode
In this case, we initialize the camera and provide the imageView as a target for rendering each
frame. CvVideoCamera is basically a wrapper around AVFoundation, so we expose some of the
AVFoundation camera options as properties. For example, we want to use the front camera, set the
video size to 352x288, and set the video orientation (the camera normally outputs in landscape
mode, which results in transposed data when you design a portrait application).
The property defaultFPS sets the FPS of the camera. If the processing is slower than the desired
FPS, frames are automatically dropped.
The property grayscale=YES results in a different colorspace, namely "YUV (YpCbCr 4:2:0)", while
grayscale=NO will output 32-bit BGRA.
Additionally, we have to manually add the framework dependencies of the opencv framework. Finally,
you should have at least the following frameworks in your project:
- opencv2
- Accelerate
- AssetsLibrary
- AVFoundation
- CoreGraphics
- CoreImage
- CoreMedia
- CoreVideo
- QuartzCore
- UIKit
- Foundation
![](images/xcode_hello_ios_frameworks_add_dependencies.png)
#### Processing frames
We follow the delegation pattern, which is very common in iOS, to provide access to each camera
frame. Basically, the View Controller has to implement the CvVideoCameraDelegate protocol and has
to be set as the delegate of the video camera:
@code{.objc}
@interface ViewController : UIViewController<CvVideoCameraDelegate>
@endcode
@code{.objc}
- (void)viewDidLoad
{
...
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
self.videoCamera.delegate = self;
...
}
@endcode
@code{.objc}
#pragma mark - Protocol CvVideoCameraDelegate
#ifdef __cplusplus
- (void)processImage:(Mat&)image
{
// Do some OpenCV stuff with the image
}
#endif
@endcode
Note that we are using C++ here (cv::Mat). Important: you have to rename the view controller's
file extension from .m to .mm, so that the compiler compiles it as Objective-C++ (Objective-C and
C++ mixed). Then __cplusplus is defined when the compiler processes the file as C++ code, which is
why we put our code inside a block where __cplusplus is defined.
#### Basic video processing
From here you can start processing video frames. For example, the following snippet converts each
frame to grayscale, inverts it, and converts the result back to BGRA:
@code{.objc}
- (void)processImage:(Mat&)image
{
// Do some OpenCV stuff with the image
Mat image_copy;
cvtColor(image, image_copy, COLOR_BGR2GRAY);
// invert image
bitwise_not(image_copy, image_copy);
//Convert BGR to BGRA (three channel to four channel)
Mat bgr;
cvtColor(image_copy, bgr, COLOR_GRAY2BGR);
cvtColor(bgr, image, COLOR_BGR2BGRA);
}
@endcode
#### Start!
Finally, we have to tell the camera to actually start/stop working. The following code will start
the camera when you press the button, assuming you connected the UI properly:
@code{.objc}
#pragma mark - UI Actions
- (IBAction)actionStart:(id)sender
{
[self.videoCamera start];
}
@endcode
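To stop capturing again (for example when the view disappears), CvVideoCamera offers a matching
stop method; a minimal sketch:
@code{.objc}
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    [self.videoCamera stop]; // stop the capture session when the view goes away
}
@endcode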
#### Hints
Try to avoid costly matrix copy operations as much as you can, especially if you are aiming for
real-time processing. As the image data is passed by reference, work in-place if possible.
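For instance (a sketch of our own), mirroring the frame can be done directly in the buffer that is
passed in, without allocating any intermediate Mat:
@code{.objc}
- (void)processImage:(Mat&)image
{
    // flip horizontally, writing back into the same frame buffer
    cv::flip(image, image, 1);
}
@endcode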
When you are working on grayscale data, set grayscale = YES, as the YUV colorspace gives you direct
access to the luminance plane.
The Accelerate framework provides some CPU-accelerated DSP filters, which may come in handy in your
case.