  • Notes on issues encountered while configuring the CPU-only version of Faster R-CNN (Caffe)

    Running the Faster R-CNN code
    Like Fast R-CNN, the official Faster R-CNN implementation also uses Caffe as its framework.
    First, clone the project to your local machine (a proxy may be required).

    Make sure to clone with --recursive

    git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git
    Then go into the lib directory and run make (in a Python 3 environment you need to switch to Python 2 and adjust the build configuration files accordingly):
    cd $FRCN_ROOT/lib
    make
    Note that the faster-rcnn project cannot live under a path containing Chinese characters; placing it in the home directory lets make run normally.
    Modify the setup.py build script for a CPU-only build:

    # --------------------------------------------------------
    # Fast R-CNN
    # Copyright (c) 2015 Microsoft
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Ross Girshick
    # --------------------------------------------------------
    
    import os
    from os.path import join as pjoin
    from setuptools import setup
    from distutils.extension import Extension
    from Cython.Distutils import build_ext
    import subprocess
    import numpy as np
    
    def find_in_path(name, path):
        "Find a file in a search path"
        # Adapted fom
        # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/
        for dir in path.split(os.pathsep):
            binpath = pjoin(dir, name)
            if os.path.exists(binpath):
                return os.path.abspath(binpath)
        return None
    
    
    def locate_cuda():
        """Locate the CUDA environment on the system
    
        Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'
        and values giving the absolute path to each directory.
    
        Starts by looking for the CUDAHOME env variable. If not found, everything
        is based on finding 'nvcc' in the PATH.
        """
    
        # first check if the CUDAHOME env variable is in use
        if 'CUDAHOME' in os.environ:
            home = os.environ['CUDAHOME']
            nvcc = pjoin(home, 'bin', 'nvcc')
        else:
            # otherwise, search the PATH for NVCC
            default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin')
            nvcc = find_in_path('nvcc', os.environ['PATH'] + os.pathsep + default_path)
            if nvcc is None:
                raise EnvironmentError('The nvcc binary could not be '
                    'located in your $PATH. Either add it to your path, or set $CUDAHOME')
            home = os.path.dirname(os.path.dirname(nvcc))
    
        cudaconfig = {'home':home, 'nvcc':nvcc,
                      'include': pjoin(home, 'include'),
                      'lib64': pjoin(home, 'lib64')}
        for k, v in cudaconfig.iteritems():
            if not os.path.exists(v):
                raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))
    
        return cudaconfig
    #CUDA = locate_cuda()
    
    
    # Obtain the numpy include directory.  This logic works across numpy versions.
    try:
        numpy_include = np.get_include()
    except AttributeError:
        numpy_include = np.get_numpy_include()
    
    def customize_compiler_for_nvcc(self):
        """inject deep into distutils to customize how the dispatch
        to gcc/nvcc works.
    
        If you subclass UnixCCompiler, it's not trivial to get your subclass
        injected in, and still have the right customizations (i.e.
        distutils.sysconfig.customize_compiler) run on it. So instead of going
        the OO route, I have this. Note, it's kindof like a wierd functional
        subclassing going on."""
    
        # tell the compiler it can processes .cu
        self.src_extensions.append('.cu')
    
        # save references to the default compiler_so and _comple methods
        default_compiler_so = self.compiler_so
        super = self._compile
    
        # now redefine the _compile method. This gets executed for each
        # object but distutils doesn't have the ability to change compilers
        # based on source extension: we add it.
        def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
            if os.path.splitext(src)[1] == '.cu':
                # use the cuda for .cu files
                #self.set_executable('compiler_so', CUDA['nvcc'])
                # use only a subset of the extra_postargs, which are 1-1 translated
                # from the extra_compile_args in the Extension class
                postargs = extra_postargs['nvcc']
            else:
                postargs = extra_postargs['gcc']
    
            super(obj, src, ext, cc_args, postargs, pp_opts)
            # reset the default compiler_so, which we might have changed for cuda
            self.compiler_so = default_compiler_so
    
        # inject our redefined _compile method into the class
        self._compile = _compile
    
    
    # run the customize_compiler
    class custom_build_ext(build_ext):
        def build_extensions(self):
            customize_compiler_for_nvcc(self.compiler)
            build_ext.build_extensions(self)
    
    
    ext_modules = [
        Extension(
            "utils.cython_bbox",
            ["utils/bbox.pyx"],
            extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
            include_dirs = [numpy_include]
        ),
        Extension(
            "nms.cpu_nms",
            ["nms/cpu_nms.pyx"],
            extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
            include_dirs = [numpy_include]
        ),
        #Extension('nms.gpu_nms',
            #['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
            #library_dirs=[CUDA['lib64']],
            #libraries=['cudart'],
            #language='c++',
            #runtime_library_dirs=[CUDA['lib64']],
            # this syntax is specific to this build system
            # we're only going to use certain compiler args with nvcc and not with
            # gcc the implementation of this trick is in customize_compiler() below
            #extra_compile_args={'gcc': ["-Wno-unused-function"],
            #                    'nvcc': ['-arch=sm_35',
            #                             '--ptxas-options=-v',
            #                            '-c',
            #                             '--compiler-options',
            #                             "'-fPIC'"]},
            #include_dirs = [numpy_include, CUDA['include']]
        #),
        Extension(
            'pycocotools._mask',
            sources=['pycocotools/maskApi.c', 'pycocotools/_mask.pyx'],
            include_dirs = [numpy_include, 'pycocotools'],
            extra_compile_args={
                'gcc': ['-Wno-cpp', '-Wno-unused-function', '-std=c99']},
        ),
    ]
    
    setup(
        name='fast_rcnn',
        ext_modules=ext_modules,
        # inject our custom trigger
        cmdclass={'build_ext': custom_build_ext},
    )
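
    A quick way to confirm that the CPU-only Cython extensions actually built is to import them from the lib directory. The snippet below is a minimal sketch, not part of the repository: it assumes Python 2, that it is run from inside $FRCN_ROOT/lib after make has finished, and that the module signatures match the upstream py-faster-rcnn sources.

    # check_cython_ext.py -- sanity check for the CPU-only Cython modules
    # Run from $FRCN_ROOT/lib after "make" has completed.
    import numpy as np

    from utils.cython_bbox import bbox_overlaps   # built from utils/bbox.pyx
    from nms.cpu_nms import cpu_nms               # built from nms/cpu_nms.pyx

    print('cython_bbox and cpu_nms imported successfully')

    # cpu_nms expects float32 detections [x1, y1, x2, y2, score] and a threshold
    dets = np.array([[0, 0, 10, 10, 0.9],
                     [1, 1, 9, 9, 0.8]], dtype=np.float32)
    print('kept indices:', cpu_nms(dets, 0.5))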
    

    Since I am using the CPU-only version, Makefile.config has to be configured accordingly, and then the build is run with make -j16.
    Modify Makefile.config as follows:

    ## Refer to http://caffe.berkeleyvision.org/installation.html
    # Contributions simplifying and improving our build system are welcome!
    
    # cuDNN acceleration switch (uncomment to build with cuDNN).
    # USE_CUDNN := 1
    
    # CPU-only switch (uncomment to build without GPU support).
     CPU_ONLY := 1
    
    # To customize your choice of compiler, uncomment and set the following.
    # N.B. the default for Linux is g++ and the default for OSX is clang++
    # CUSTOM_CXX := g++
    
    # CUDA directory contains bin/ and lib/ directories that we need.
    CUDA_DIR := /usr/local/cuda
    # On Ubuntu 14.04, if cuda tools are installed via
    # "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
    # CUDA_DIR := /usr
    
    # CUDA architecture setting: going with all of them.
    # For CUDA < 6.0, comment the *_50 lines for compatibility.
    CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
    		-gencode arch=compute_20,code=sm_21 \
    		-gencode arch=compute_30,code=sm_30 \
    		-gencode arch=compute_35,code=sm_35 \
    		-gencode arch=compute_50,code=sm_50 \
    		-gencode arch=compute_50,code=compute_50
    
    # BLAS choice:
    # atlas for ATLAS (default)
    # mkl for MKL
    # open for OpenBlas
    BLAS := atlas
    # Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
    # Leave commented to accept the defaults for your choice of BLAS
    # (which should work)!
    # BLAS_INCLUDE := /opt/OpenBLAS/include
    # BLAS_LIB := /opt/OpenBLAS/lib
    
    # This is required only if you will compile the matlab interface.
    # MATLAB directory should contain the mex binary in /bin.
    # MATLAB_DIR := /usr/local
    # MATLAB_DIR := /Applications/MATLAB_R2012b.app
    
    # NOTE: this is required only if you will compile the python interface.
    # We need to be able to find Python.h and numpy/arrayobject.h.
    PYTHON_INCLUDE := /usr/include/python2.7 \
    		/usr/lib/python2.7/dist-packages/numpy/core/include
    # Anaconda Python distribution is quite popular. Include path:
    # Verify anaconda location, sometimes it's in root.
    # ANACONDA_HOME := $(HOME)/anaconda
    # PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
    		# $(ANACONDA_HOME)/include/python2.7 \
    		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \
    
    # We need to be able to find libpythonX.X.so or .dylib.
    PYTHON_LIB := /usr/lib
    # PYTHON_LIB := $(ANACONDA_HOME)/lib
    
    # Uncomment to support layers written in Python (will link against Python libs)
    # This will require an additional dependency boost_regex provided by boost.
     WITH_PYTHON_LAYER := 1
    
    # Whatever else you find you need goes here.
    INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include  /usr/include/hdf5/serial/
    LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
    
    # Uncomment to use `pkg-config` to specify OpenCV library paths.
    # (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
    # USE_PKG_CONFIG := 1
    
    BUILD_DIR := build
    DISTRIBUTE_DIR := distribute
    
    # Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
    # DEBUG := 1
    
    # The ID of the GPU that 'make runtest' will use to run unit tests.
    TEST_GPUID := 0
    
    # enable pretty build (comment to see full commands)
    Q ?= @
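
    If you are not sure whether the PYTHON_INCLUDE entries above match your system, both directories can be queried from the interpreter itself. This is a small sketch using only the standard library and NumPy; run it with the same Python that pycaffe will use.

    # print_include_dirs.py -- show the paths to put into PYTHON_INCLUDE
    import sysconfig              # location of Python.h
    import numpy as np            # location of numpy/arrayobject.h

    print(sysconfig.get_paths()['include'])   # e.g. /usr/include/python2.7
    print(np.get_include())                   # e.g. /usr/lib/python2.7/dist-packages/numpy/core/include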
    

    Modify the Makefile as follows:

    PROJECT := caffe
    
    CONFIG_FILE := Makefile.config
    # Explicitly check for the config file, otherwise make -k will proceed anyway.
    ifeq ($(wildcard $(CONFIG_FILE)),)
    $(error $(CONFIG_FILE) not found. See $(CONFIG_FILE).example.)
    endif
    include $(CONFIG_FILE)
    
    BUILD_DIR_LINK := $(BUILD_DIR)
    ifeq ($(RELEASE_BUILD_DIR),)
    	RELEASE_BUILD_DIR := .$(BUILD_DIR)_release
    endif
    ifeq ($(DEBUG_BUILD_DIR),)
    	DEBUG_BUILD_DIR := .$(BUILD_DIR)_debug
    endif
    
    DEBUG ?= 0
    ifeq ($(DEBUG), 1)
    	BUILD_DIR := $(DEBUG_BUILD_DIR)
    	OTHER_BUILD_DIR := $(RELEASE_BUILD_DIR)
    else
    	BUILD_DIR := $(RELEASE_BUILD_DIR)
    	OTHER_BUILD_DIR := $(DEBUG_BUILD_DIR)
    endif
    
    # All of the directories containing code.
    SRC_DIRS := $(shell find * -type d -exec bash -c "find {} -maxdepth 1 \
    	\( -name '*.cpp' -o -name '*.proto' \) | grep -q ." \; -print)
    
    # The target shared library name
    LIBRARY_NAME := $(PROJECT)
    LIB_BUILD_DIR := $(BUILD_DIR)/lib
    STATIC_NAME := $(LIB_BUILD_DIR)/lib$(LIBRARY_NAME).a
    DYNAMIC_VERSION_MAJOR 		:= 1
    DYNAMIC_VERSION_MINOR 		:= 0
    DYNAMIC_VERSION_REVISION 	:= 0-rc3
    DYNAMIC_NAME_SHORT := lib$(LIBRARY_NAME).so
    #DYNAMIC_SONAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR)
    DYNAMIC_VERSIONED_NAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)
    DYNAMIC_NAME := $(LIB_BUILD_DIR)/$(DYNAMIC_VERSIONED_NAME_SHORT)
    COMMON_FLAGS += -DCAFFE_VERSION=$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)
    
    ##############################
    # Get all source files
    ##############################
    # CXX_SRCS are the source files excluding the test ones.
    CXX_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cpp" -name "*.cpp")
    # CU_SRCS are the cuda source files
    CU_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cu" -name "*.cu")
    # TEST_SRCS are the test source files
    TEST_MAIN_SRC := src/$(PROJECT)/test/test_caffe_main.cpp
    TEST_SRCS := $(shell find src/$(PROJECT) -name "test_*.cpp")
    TEST_SRCS := $(filter-out $(TEST_MAIN_SRC), $(TEST_SRCS))
    TEST_CU_SRCS := $(shell find src/$(PROJECT) -name "test_*.cu")
    GTEST_SRC := src/gtest/gtest-all.cpp
    # TOOL_SRCS are the source files for the tool binaries
    TOOL_SRCS := $(shell find tools -name "*.cpp")
    # EXAMPLE_SRCS are the source files for the example binaries
    EXAMPLE_SRCS := $(shell find examples -name "*.cpp")
    # BUILD_INCLUDE_DIR contains any generated header files we want to include.
    BUILD_INCLUDE_DIR := $(BUILD_DIR)/src
    # PROTO_SRCS are the protocol buffer definitions
    PROTO_SRC_DIR := src/$(PROJECT)/proto
    PROTO_SRCS := $(wildcard $(PROTO_SRC_DIR)/*.proto)
    # PROTO_BUILD_DIR will contain the .cc and obj files generated from
    # PROTO_SRCS; PROTO_BUILD_INCLUDE_DIR will contain the .h header files
    PROTO_BUILD_DIR := $(BUILD_DIR)/$(PROTO_SRC_DIR)
    PROTO_BUILD_INCLUDE_DIR := $(BUILD_INCLUDE_DIR)/$(PROJECT)/proto
    # NONGEN_CXX_SRCS includes all source/header files except those generated
    # automatically (e.g., by proto).
    NONGEN_CXX_SRCS := $(shell find \
    	src/$(PROJECT) \
    	include/$(PROJECT) \
    	python/$(PROJECT) \
    	matlab/+$(PROJECT)/private \
    	examples \
    	tools \
    	-name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh")
    LINT_SCRIPT := scripts/cpp_lint.py
    LINT_OUTPUT_DIR := $(BUILD_DIR)/.lint
    LINT_EXT := lint.txt
    LINT_OUTPUTS := $(addsuffix .$(LINT_EXT), $(addprefix $(LINT_OUTPUT_DIR)/, $(NONGEN_CXX_SRCS)))
    EMPTY_LINT_REPORT := $(BUILD_DIR)/.$(LINT_EXT)
    NONEMPTY_LINT_REPORT := $(BUILD_DIR)/$(LINT_EXT)
    # PY$(PROJECT)_SRC is the python wrapper for $(PROJECT)
    PY$(PROJECT)_SRC := python/$(PROJECT)/_$(PROJECT).cpp
    PY$(PROJECT)_SO := python/$(PROJECT)/_$(PROJECT).so
    PY$(PROJECT)_HXX := include/$(PROJECT)/layers/python_layer.hpp
    # MAT$(PROJECT)_SRC is the mex entrance point of matlab package for $(PROJECT)
    MAT$(PROJECT)_SRC := matlab/+$(PROJECT)/private/$(PROJECT)_.cpp
    ifneq ($(MATLAB_DIR),)
    	MAT_SO_EXT := $(shell $(MATLAB_DIR)/bin/mexext)
    endif
    MAT$(PROJECT)_SO := matlab/+$(PROJECT)/private/$(PROJECT)_.$(MAT_SO_EXT)
    
    ##############################
    # Derive generated files
    ##############################
    # The generated files for protocol buffers
    PROTO_GEN_HEADER_SRCS := $(addprefix $(PROTO_BUILD_DIR)/, \
    		$(notdir ${PROTO_SRCS:.proto=.pb.h}))
    PROTO_GEN_HEADER := $(addprefix $(PROTO_BUILD_INCLUDE_DIR)/, \
    		$(notdir ${PROTO_SRCS:.proto=.pb.h}))
    PROTO_GEN_CC := $(addprefix $(BUILD_DIR)/, ${PROTO_SRCS:.proto=.pb.cc})
    PY_PROTO_BUILD_DIR := python/$(PROJECT)/proto
    PY_PROTO_INIT := python/$(PROJECT)/proto/__init__.py
    PROTO_GEN_PY := $(foreach file,${PROTO_SRCS:.proto=_pb2.py}, \
    		$(PY_PROTO_BUILD_DIR)/$(notdir $(file)))
    # The objects corresponding to the source files
    # These objects will be linked into the final shared library, so we
    # exclude the tool, example, and test objects.
    CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o})
    CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o})
    PROTO_OBJS := ${PROTO_GEN_CC:.cc=.o}
    OBJS := $(PROTO_OBJS) $(CXX_OBJS) $(CU_OBJS)
    # tool, example, and test objects
    TOOL_OBJS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o})
    TOOL_BUILD_DIR := $(BUILD_DIR)/tools
    TEST_CXX_BUILD_DIR := $(BUILD_DIR)/src/$(PROJECT)/test
    TEST_CU_BUILD_DIR := $(BUILD_DIR)/cuda/src/$(PROJECT)/test
    TEST_CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o})
    TEST_CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o})
    TEST_OBJS := $(TEST_CXX_OBJS) $(TEST_CU_OBJS)
    GTEST_OBJ := $(addprefix $(BUILD_DIR)/, ${GTEST_SRC:.cpp=.o})
    EXAMPLE_OBJS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o})
    # Output files for automatic dependency generation
    DEPS := ${CXX_OBJS:.o=.d} ${CU_OBJS:.o=.d} ${TEST_CXX_OBJS:.o=.d} \
    	${TEST_CU_OBJS:.o=.d} $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}
    # tool, example, and test bins
    TOOL_BINS := ${TOOL_OBJS:.o=.bin}
    EXAMPLE_BINS := ${EXAMPLE_OBJS:.o=.bin}
    # symlinks to tool bins without the ".bin" extension
    TOOL_BIN_LINKS := ${TOOL_BINS:.bin=}
    # Put the test binaries in build/test for convenience.
    TEST_BIN_DIR := $(BUILD_DIR)/test
    TEST_CU_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
    		$(foreach obj,$(TEST_CU_OBJS),$(basename $(notdir $(obj))))))
    TEST_CXX_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
    		$(foreach obj,$(TEST_CXX_OBJS),$(basename $(notdir $(obj))))))
    TEST_BINS := $(TEST_CXX_BINS) $(TEST_CU_BINS)
    # TEST_ALL_BIN is the test binary that links caffe dynamically.
    TEST_ALL_BIN := $(TEST_BIN_DIR)/test_all.testbin
    
    ##############################
    # Derive compiler warning dump locations
    ##############################
    WARNS_EXT := warnings.txt
    CXX_WARNS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o.$(WARNS_EXT)})
    CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o.$(WARNS_EXT)})
    TOOL_WARNS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o.$(WARNS_EXT)})
    EXAMPLE_WARNS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o.$(WARNS_EXT)})
    TEST_WARNS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o.$(WARNS_EXT)})
    TEST_CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o.$(WARNS_EXT)})
    ALL_CXX_WARNS := $(CXX_WARNS) $(TOOL_WARNS) $(EXAMPLE_WARNS) $(TEST_WARNS)
    ALL_CU_WARNS := $(CU_WARNS) $(TEST_CU_WARNS)
    ALL_WARNS := $(ALL_CXX_WARNS) $(ALL_CU_WARNS)
    
    EMPTY_WARN_REPORT := $(BUILD_DIR)/.$(WARNS_EXT)
    NONEMPTY_WARN_REPORT := $(BUILD_DIR)/$(WARNS_EXT)
    
    ##############################
    # Derive include and lib directories
    ##############################
    CUDA_INCLUDE_DIR := $(CUDA_DIR)/include
    
    CUDA_LIB_DIR :=
    # add <cuda>/lib64 only if it exists
    ifneq ("$(wildcard $(CUDA_DIR)/lib64)","")
    	CUDA_LIB_DIR += $(CUDA_DIR)/lib64
    endif
    CUDA_LIB_DIR += $(CUDA_DIR)/lib
    
    INCLUDE_DIRS += $(BUILD_INCLUDE_DIR) ./src ./include
    ifneq ($(CPU_ONLY), 1)
    	INCLUDE_DIRS += $(CUDA_INCLUDE_DIR)
    	LIBRARY_DIRS += $(CUDA_LIB_DIR)
    	LIBRARIES := cudart cublas curand
    endif
    
    LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
    
    # handle IO dependencies
    USE_LEVELDB ?= 1
    USE_LMDB ?= 1
    USE_OPENCV ?= 1
    
    ifeq ($(USE_LEVELDB), 1)
    	LIBRARIES += leveldb snappy
    endif
    ifeq ($(USE_LMDB), 1)
    	LIBRARIES += lmdb
    endif
    ifeq ($(USE_OPENCV), 1)
    	LIBRARIES += opencv_core opencv_highgui opencv_imgproc 
    
    	ifeq ($(OPENCV_VERSION), 3)
    		LIBRARIES += opencv_imgcodecs
    	endif
    		
    endif
    PYTHON_LIBRARIES ?= boost_python python2.7
    WARNINGS := -Wall -Wno-sign-compare
    
    ##############################
    # Set build directories
    ##############################
    
    DISTRIBUTE_DIR ?= distribute
    DISTRIBUTE_SUBDIRS := $(DISTRIBUTE_DIR)/bin $(DISTRIBUTE_DIR)/lib
    DIST_ALIASES := dist
    ifneq ($(strip $(DISTRIBUTE_DIR)),distribute)
    		DIST_ALIASES += distribute
    endif
    
    ALL_BUILD_DIRS := $(sort $(BUILD_DIR) $(addprefix $(BUILD_DIR)/, $(SRC_DIRS)) \
    	$(addprefix $(BUILD_DIR)/cuda/, $(SRC_DIRS)) \
    	$(LIB_BUILD_DIR) $(TEST_BIN_DIR) $(PY_PROTO_BUILD_DIR) $(LINT_OUTPUT_DIR) \
    	$(DISTRIBUTE_SUBDIRS) $(PROTO_BUILD_INCLUDE_DIR))
    
    ##############################
    # Set directory for Doxygen-generated documentation
    ##############################
    DOXYGEN_CONFIG_FILE ?= ./.Doxyfile
    # should be the same as OUTPUT_DIRECTORY in the .Doxyfile
    DOXYGEN_OUTPUT_DIR ?= ./doxygen
    DOXYGEN_COMMAND ?= doxygen
    # All the files that might have Doxygen documentation.
    DOXYGEN_SOURCES := $(shell find \
    	src/$(PROJECT) \
    	include/$(PROJECT) \
    	python/ \
    	matlab/ \
    	examples \
    	tools \
    	-name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh" -or \
            -name "*.py" -or -name "*.m")
    DOXYGEN_SOURCES += $(DOXYGEN_CONFIG_FILE)
    
    
    ##############################
    # Configure build
    ##############################
    
    # Determine platform
    UNAME := $(shell uname -s)
    ifeq ($(UNAME), Linux)
    	LINUX := 1
    else ifeq ($(UNAME), Darwin)
    	OSX := 1
    endif
    
    # Linux
    ifeq ($(LINUX), 1)
    	CXX ?= /usr/bin/g++
    	GCCVERSION := $(shell $(CXX) -dumpversion | cut -f1,2 -d.)
    	# older versions of gcc are too dumb to build boost with -Wuninitalized
    	ifeq ($(shell echo | awk '{exit $(GCCVERSION) < 4.6;}'), 1)
    		WARNINGS += -Wno-uninitialized
    	endif
    	# boost::thread is reasonably called boost_thread (compare OS X)
    	# We will also explicitly add stdc++ to the link target.
    	LIBRARIES += boost_thread stdc++
    	VERSIONFLAGS += -Wl,-soname,$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../lib
    endif
    
    # OS X:
    # clang++ instead of g++
    # libstdc++ for NVCC compatibility on OS X >= 10.9 with CUDA < 7.0
    ifeq ($(OSX), 1)
    	CXX := /usr/bin/clang++
    	ifneq ($(CPU_ONLY), 1)
    		CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release \d' | grep -o '\d')
    		ifeq ($(shell echo | awk '{exit $(CUDA_VERSION) < 7.0;}'), 1)
    			CXXFLAGS += -stdlib=libstdc++
    			LINKFLAGS += -stdlib=libstdc++
    		endif
    		# clang throws this warning for cuda headers
    		WARNINGS += -Wno-unneeded-internal-declaration
    	endif
    	# gtest needs to use its own tuple to not conflict with clang
    	COMMON_FLAGS += -DGTEST_USE_OWN_TR1_TUPLE=1
    	# boost::thread is called boost_thread-mt to mark multithreading on OS X
    	LIBRARIES += boost_thread-mt
    	# we need to explicitly ask for the rpath to be obeyed
    	DYNAMIC_FLAGS := -install_name @rpath/libcaffe.so
    	ORIGIN := @loader_path
    	VERSIONFLAGS += -Wl,-install_name,$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../../build/lib
    else
    	ORIGIN := $$ORIGIN
    endif
    
    # Custom compiler
    ifdef CUSTOM_CXX
    	CXX := $(CUSTOM_CXX)
    endif
    
    # Static linking
    ifneq (,$(findstring clang++,$(CXX)))
    	STATIC_LINK_COMMAND := -Wl,-force_load $(STATIC_NAME)
    else ifneq (,$(findstring g++,$(CXX)))
    	STATIC_LINK_COMMAND := -Wl,--whole-archive $(STATIC_NAME) -Wl,--no-whole-archive
    else
      # The following line must not be indented with a tab, since we are not inside a target
      $(error Cannot static link with the $(CXX) compiler)
    endif
    
    # Debugging
    ifeq ($(DEBUG), 1)
    	COMMON_FLAGS += -DDEBUG -g -O0
    	NVCCFLAGS += -G
    else
    	COMMON_FLAGS += -DNDEBUG -O2
    endif
    
    # cuDNN acceleration configuration.
    ifeq ($(USE_CUDNN), 1)
    	LIBRARIES += cudnn
    	COMMON_FLAGS += -DUSE_CUDNN
    endif
    
    # configure IO libraries
    ifeq ($(USE_OPENCV), 1)
    	COMMON_FLAGS += -DUSE_OPENCV
    endif
    ifeq ($(USE_LEVELDB), 1)
    	COMMON_FLAGS += -DUSE_LEVELDB
    endif
    ifeq ($(USE_LMDB), 1)
    	COMMON_FLAGS += -DUSE_LMDB
    ifeq ($(ALLOW_LMDB_NOLOCK), 1)
    	COMMON_FLAGS += -DALLOW_LMDB_NOLOCK
    endif
    endif
    
    # CPU-only configuration
    ifeq ($(CPU_ONLY), 1)
    	OBJS := $(PROTO_OBJS) $(CXX_OBJS)
    	TEST_OBJS := $(TEST_CXX_OBJS)
    	TEST_BINS := $(TEST_CXX_BINS)
    	ALL_WARNS := $(ALL_CXX_WARNS)
    	TEST_FILTER := --gtest_filter="-*GPU*"
    	COMMON_FLAGS += -DCPU_ONLY
    endif
    
    # Python layer support
    ifeq ($(WITH_PYTHON_LAYER), 1)
    	COMMON_FLAGS += -DWITH_PYTHON_LAYER
    	LIBRARIES += $(PYTHON_LIBRARIES)
    endif
    
    # BLAS configuration (default = ATLAS)
    BLAS ?= atlas
    ifeq ($(BLAS), mkl)
    	# MKL
    	LIBRARIES += mkl_rt
    	COMMON_FLAGS += -DUSE_MKL
    	MKL_DIR ?= /opt/intel/mkl
    	BLAS_INCLUDE ?= $(MKL_DIR)/include
    	BLAS_LIB ?= $(MKL_DIR)/lib $(MKL_DIR)/lib/intel64
    else ifeq ($(BLAS), open)
    	# OpenBLAS
    	LIBRARIES += openblas
    else
    	# ATLAS
    	ifeq ($(LINUX), 1)
    		ifeq ($(BLAS), atlas)
    			# Linux simply has cblas and atlas
    			LIBRARIES += cblas atlas
    		endif
    	else ifeq ($(OSX), 1)
    		# OS X packages atlas as the vecLib framework
    		LIBRARIES += cblas
    		# 10.10 has accelerate while 10.9 has veclib
    		XCODE_CLT_VER := $(shell pkgutil --pkg-info=com.apple.pkg.CLTools_Executables | grep 'version' | sed 's/[^0-9]*\([0-9]\).*/\1/')
    		XCODE_CLT_GEQ_6 := $(shell [ $(XCODE_CLT_VER) -gt 5 ] && echo 1)
    		ifeq ($(XCODE_CLT_GEQ_6), 1)
    			BLAS_INCLUDE ?= /System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/
    			LDFLAGS += -framework Accelerate
    		else
    			BLAS_INCLUDE ?= /System/Library/Frameworks/vecLib.framework/Versions/Current/Headers/
    			LDFLAGS += -framework vecLib
    		endif
    	endif
    endif
    INCLUDE_DIRS += $(BLAS_INCLUDE)
    LIBRARY_DIRS += $(BLAS_LIB)
    
    LIBRARY_DIRS += $(LIB_BUILD_DIR)
    
    # Automatic dependency generation (nvcc is handled separately)
    CXXFLAGS += -MMD -MP
    
    # Complete build flags.
    COMMON_FLAGS += $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
    CXXFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)
    NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
    # mex may invoke an older gcc that is too liberal with -Wuninitalized
    MATLAB_CXXFLAGS := $(CXXFLAGS) -Wno-uninitialized
    LINKFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)
    
    USE_PKG_CONFIG ?= 0
    ifeq ($(USE_PKG_CONFIG), 1)
    	PKG_CONFIG := $(shell pkg-config opencv --libs)
    else
    	PKG_CONFIG :=
    endif
    LDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) $(PKG_CONFIG) \
    		$(foreach library,$(LIBRARIES),-l$(library))
    PYTHON_LDFLAGS := $(LDFLAGS) $(foreach library,$(PYTHON_LIBRARIES),-l$(library))
    
    # 'superclean' target recursively* deletes all files ending with an extension
    # in $(SUPERCLEAN_EXTS) below.  This may be useful if you've built older
    # versions of Caffe that do not place all generated files in a location known
    # to the 'clean' target.
    #
    # 'supercleanlist' will list the files to be deleted by make superclean.
    #
    # * Recursive with the exception that symbolic links are never followed, per the
    # default behavior of 'find'.
    SUPERCLEAN_EXTS := .so .a .o .bin .testbin .pb.cc .pb.h _pb2.py .cuo
    
    # Set the sub-targets of the 'everything' target.
    EVERYTHING_TARGETS := all py$(PROJECT) test warn lint
    # Only build matcaffe as part of "everything" if MATLAB_DIR is specified.
    ifneq ($(MATLAB_DIR),)
    	EVERYTHING_TARGETS += mat$(PROJECT)
    endif
    
    ##############################
    # Define build targets
    ##############################
    .PHONY: all lib test clean docs linecount lint lintclean tools examples $(DIST_ALIASES) \
    	py mat py$(PROJECT) mat$(PROJECT) proto runtest \
    	superclean supercleanlist supercleanfiles warn everything
    
    all: lib tools examples
    
    lib: $(STATIC_NAME) $(DYNAMIC_NAME)
    
    everything: $(EVERYTHING_TARGETS)
    
    linecount:
    	cloc --read-lang-def=$(PROJECT).cloc \
    		src/$(PROJECT) include/$(PROJECT) tools examples \
    		python matlab
    
    lint: $(EMPTY_LINT_REPORT)
    
    lintclean:
    	@ $(RM) -r $(LINT_OUTPUT_DIR) $(EMPTY_LINT_REPORT) $(NONEMPTY_LINT_REPORT)
    
    docs: $(DOXYGEN_OUTPUT_DIR)
    	@ cd ./docs ; ln -sfn ../$(DOXYGEN_OUTPUT_DIR)/html doxygen
    
    $(DOXYGEN_OUTPUT_DIR): $(DOXYGEN_CONFIG_FILE) $(DOXYGEN_SOURCES)
    	$(DOXYGEN_COMMAND) $(DOXYGEN_CONFIG_FILE)
    
    $(EMPTY_LINT_REPORT): $(LINT_OUTPUTS) | $(BUILD_DIR)
    	@ cat $(LINT_OUTPUTS) > $@
    	@ if [ -s "$@" ]; then \
    		cat $@; \
    		mv $@ $(NONEMPTY_LINT_REPORT); \
    		echo "Found one or more lint errors."; \
    		exit 1; \
    	  fi; \
    	  $(RM) $(NONEMPTY_LINT_REPORT); \
    	  echo "No lint errors!";
    
    $(LINT_OUTPUTS): $(LINT_OUTPUT_DIR)/%.lint.txt : % $(LINT_SCRIPT) | $(LINT_OUTPUT_DIR)
    	@ mkdir -p $(dir $@)
    	@ python $(LINT_SCRIPT) $< 2>&1 \
    		| grep -v "^Done processing " \
    		| grep -v "^Total errors found: 0" \
    		> $@ \
    		|| true
    
    test: $(TEST_ALL_BIN) $(TEST_ALL_DYNLINK_BIN) $(TEST_BINS)
    
    tools: $(TOOL_BINS) $(TOOL_BIN_LINKS)
    
    examples: $(EXAMPLE_BINS)
    
    py$(PROJECT): py
    
    py: $(PY$(PROJECT)_SO) $(PROTO_GEN_PY)
    
    $(PY$(PROJECT)_SO): $(PY$(PROJECT)_SRC) $(PY$(PROJECT)_HXX) | $(DYNAMIC_NAME)
    	@ echo CXX/LD -o $@ $<
    	$(Q)$(CXX) -shared -o $@ $(PY$(PROJECT)_SRC) \
    		-o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(PYTHON_LDFLAGS) \
    		-Wl,-rpath,$(ORIGIN)/../../build/lib
    
    mat$(PROJECT): mat
    
    mat: $(MAT$(PROJECT)_SO)
    
    $(MAT$(PROJECT)_SO): $(MAT$(PROJECT)_SRC) $(STATIC_NAME)
    	@ if [ -z "$(MATLAB_DIR)" ]; then \
    		echo "MATLAB_DIR must be specified in $(CONFIG_FILE)" \
    			"to build mat$(PROJECT)."; \
    		exit 1; \
    	fi
    	@ echo MEX $<
    	$(Q)$(MATLAB_DIR)/bin/mex $(MAT$(PROJECT)_SRC) \
    			CXX="$(CXX)" \
    			CXXFLAGS="$$CXXFLAGS $(MATLAB_CXXFLAGS)" \
    			CXXLIBS="$$CXXLIBS $(STATIC_LINK_COMMAND) $(LDFLAGS)" -output $@
    	@ if [ -f "$(PROJECT)_.d" ]; then \
    		mv -f $(PROJECT)_.d $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}; \
    	fi
    
    runtest: $(TEST_ALL_BIN)
    	$(TOOL_BUILD_DIR)/caffe
    	$(TEST_ALL_BIN) $(TEST_GPUID) --gtest_shuffle $(TEST_FILTER)
    
    pytest: py
    	cd python; python -m unittest discover -s caffe/test
    
    mattest: mat
    	cd matlab; $(MATLAB_DIR)/bin/matlab -nodisplay -r 'caffe.run_tests(), exit()'
    
    warn: $(EMPTY_WARN_REPORT)
    
    $(EMPTY_WARN_REPORT): $(ALL_WARNS) | $(BUILD_DIR)
    	@ cat $(ALL_WARNS) > $@
    	@ if [ -s "$@" ]; then \
    		cat $@; \
    		mv $@ $(NONEMPTY_WARN_REPORT); \
    		echo "Compiler produced one or more warnings."; \
    		exit 1; \
    	  fi; \
    	  $(RM) $(NONEMPTY_WARN_REPORT); \
    	  echo "No compiler warnings!";
    
    $(ALL_WARNS): %.o.$(WARNS_EXT) : %.o
    
    $(BUILD_DIR_LINK): $(BUILD_DIR)/.linked
    
    # Create a target ".linked" in this BUILD_DIR to tell Make that the "build" link
    # is currently correct, then delete the one in the OTHER_BUILD_DIR in case it
    # exists and $(DEBUG) is toggled later.
    $(BUILD_DIR)/.linked:
    	@ mkdir -p $(BUILD_DIR)
    	@ $(RM) $(OTHER_BUILD_DIR)/.linked
    	@ $(RM) -r $(BUILD_DIR_LINK)
    	@ ln -s $(BUILD_DIR) $(BUILD_DIR_LINK)
    	@ touch $@
    
    $(ALL_BUILD_DIRS): | $(BUILD_DIR_LINK)
    	@ mkdir -p $@
    
    $(DYNAMIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
    	@ echo LD -o $@
    	$(Q)$(CXX) -shared -o $@ $(OBJS) $(VERSIONFLAGS) $(LINKFLAGS) $(LDFLAGS) $(DYNAMIC_FLAGS)
    	@ cd $(BUILD_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT);   ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)
    
    $(STATIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
    	@ echo AR -o $@
    	$(Q)ar rcs $@ $(OBJS)
    
    $(BUILD_DIR)/%.o: %.cpp | $(ALL_BUILD_DIRS)
    	@ echo CXX $<
    	$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
    		|| (cat $@.$(WARNS_EXT); exit 1)
    	@ cat $@.$(WARNS_EXT)
    
    $(PROTO_BUILD_DIR)/%.pb.o: $(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_GEN_HEADER) \
    		| $(PROTO_BUILD_DIR)
    	@ echo CXX $<
    	$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
    		|| (cat $@.$(WARNS_EXT); exit 1)
    	@ cat $@.$(WARNS_EXT)
    
    $(BUILD_DIR)/cuda/%.o: %.cu | $(ALL_BUILD_DIRS)
    	@ echo NVCC $<
    	$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -M $< -o ${@:.o=.d} \
    		-odir $(@D)
    	$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -c $< -o $@ 2> $@.$(WARNS_EXT) \
    		|| (cat $@.$(WARNS_EXT); exit 1)
    	@ cat $@.$(WARNS_EXT)
    
    $(TEST_ALL_BIN): $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
    		| $(DYNAMIC_NAME) $(TEST_BIN_DIR)
    	@ echo CXX/LD -o $@ $<
    	$(Q)$(CXX) $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
    		-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib
    
    $(TEST_CU_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CU_BUILD_DIR)/%.o \
    	$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
    	@ echo LD $<
    	$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
    		-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib
    
    $(TEST_CXX_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CXX_BUILD_DIR)/%.o \
    	$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
    	@ echo LD $<
    	$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
    		-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib
    
    # Target for extension-less symlinks to tool binaries with extension '*.bin'.
    $(TOOL_BUILD_DIR)/%: $(TOOL_BUILD_DIR)/%.bin | $(TOOL_BUILD_DIR)
    	@ $(RM) $@
    	@ ln -s $(notdir $<) $@
    
    $(TOOL_BINS): %.bin : %.o | $(DYNAMIC_NAME)
    	@ echo CXX/LD -o $@
    	$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
    		-Wl,-rpath,$(ORIGIN)/../lib
    
    $(EXAMPLE_BINS): %.bin : %.o | $(DYNAMIC_NAME)
    	@ echo CXX/LD -o $@
    	$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
    		-Wl,-rpath,$(ORIGIN)/../../lib
    
    proto: $(PROTO_GEN_CC) $(PROTO_GEN_HEADER)
    
    $(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_BUILD_DIR)/%.pb.h : \
    		$(PROTO_SRC_DIR)/%.proto | $(PROTO_BUILD_DIR)
    	@ echo PROTOC $<
    	$(Q)protoc --proto_path=$(PROTO_SRC_DIR) --cpp_out=$(PROTO_BUILD_DIR) $<
    
    $(PY_PROTO_BUILD_DIR)/%_pb2.py : $(PROTO_SRC_DIR)/%.proto \
    		$(PY_PROTO_INIT) | $(PY_PROTO_BUILD_DIR)
    	@ echo PROTOC \(python\) $<
    	$(Q)protoc --proto_path=$(PROTO_SRC_DIR) --python_out=$(PY_PROTO_BUILD_DIR) $<
    
    $(PY_PROTO_INIT): | $(PY_PROTO_BUILD_DIR)
    	touch $(PY_PROTO_INIT)
    
    clean:
    	@- $(RM) -rf $(ALL_BUILD_DIRS)
    	@- $(RM) -rf $(OTHER_BUILD_DIR)
    	@- $(RM) -rf $(BUILD_DIR_LINK)
    	@- $(RM) -rf $(DISTRIBUTE_DIR)
    	@- $(RM) $(PY$(PROJECT)_SO)
    	@- $(RM) $(MAT$(PROJECT)_SO)
    
    supercleanfiles:
    	$(eval SUPERCLEAN_FILES := $(strip \
    			$(foreach ext,$(SUPERCLEAN_EXTS), $(shell find . -name '*$(ext)' \
    			-not -path './data/*'))))
    
    supercleanlist: supercleanfiles
    	@ \
    	if [ -z "$(SUPERCLEAN_FILES)" ]; then \
    		echo "No generated files found."; \
    	else \
    		echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
    	fi
    
    superclean: clean supercleanfiles
    	@ \
    	if [ -z "$(SUPERCLEAN_FILES)" ]; then \
    		echo "No generated files found."; \
    	else \
    		echo "Deleting the following generated files:"; \
    		echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
    		$(RM) $(SUPERCLEAN_FILES); \
    	fi
    
    $(DIST_ALIASES): $(DISTRIBUTE_DIR)
    
    $(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)
    	# add proto
    	cp -r src/caffe/proto $(DISTRIBUTE_DIR)/
    	# add include
    	cp -r include $(DISTRIBUTE_DIR)/
    	mkdir -p $(DISTRIBUTE_DIR)/include/caffe/proto
    	cp $(PROTO_GEN_HEADER_SRCS) $(DISTRIBUTE_DIR)/include/caffe/proto
    	# add tool and example binaries
    	cp $(TOOL_BINS) $(DISTRIBUTE_DIR)/bin
    	cp $(EXAMPLE_BINS) $(DISTRIBUTE_DIR)/bin
    	# add libraries
    	cp $(STATIC_NAME) $(DISTRIBUTE_DIR)/lib
    	install -m 644 $(DYNAMIC_NAME) $(DISTRIBUTE_DIR)/lib
    	cd $(DISTRIBUTE_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT);   ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)
    	# add python - it's not the standard way, indeed...
    	cp -r python $(DISTRIBUTE_DIR)/python
    
    -include $(DEPS)
    

    If the build reports missing dependencies, install the corresponding packages.
    Then run make pycaffe.
    Next, run ./data/scripts/fetch_faster_rcnn_models.sh from $FRCN_ROOT to download the model files; you can also download them with a browser and extract them into the data folder.
    Finally, run python2 ./demo.py --cpu
    If this step errors out, you need to edit
    ../lib/fast_rcnn/nms_wrapper.py:9:#from nms.gpu_nms import gpu_nms
    that is, comment out the GPU import so that only the CPU version is used,
    and force CPU mode:
    def nms(dets, thresh, force_cpu=True):
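
    For reference, after this edit lib/fast_rcnn/nms_wrapper.py ends up looking roughly like the sketch below (based on the upstream file, with the GPU import commented out and force_cpu defaulted to True):

    # lib/fast_rcnn/nms_wrapper.py -- CPU-only variant (sketch)
    from fast_rcnn.config import cfg
    #from nms.gpu_nms import gpu_nms    # line 9: GPU import commented out
    from nms.cpu_nms import cpu_nms

    def nms(dets, thresh, force_cpu=True):
        """Dispatch only to the CPU NMS implementation."""
        if dets.shape[0] == 0:
            return []
        return cpu_nms(dets, thresh)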

    If you hit the error
    ImportError: No module named yaml
    install PyYAML with:
    python -m pip install pyyaml
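
    With pycaffe built and PyYAML installed, a short import test verifies that the Python bindings load and that CPU mode works. This is a sketch; the caffe-fast-rcnn/python path is an assumption, so adjust it to wherever your pycaffe build actually lives.

    # verify_pycaffe.py -- confirm the CPU-only pycaffe build is importable
    import sys
    sys.path.insert(0, 'caffe-fast-rcnn/python')  # assumed location of pycaffe

    import caffe
    caffe.set_mode_cpu()    # must succeed in a CPU_ONLY build
    print('pycaffe imported, CPU mode set')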

    For reference, I consulted this blog post: https://www.cnblogs.com/justinzhang/p/5386837.html

    The detection results are as follows:

  • Original post: https://www.cnblogs.com/superfly123/p/11544594.html