python selenium T2

    Automated testing models

      Module-driven testing, data-driven testing, and keyword-driven testing

      Data-driven testing: changes (updates) to the test data drive the automated runs, which in turn change the test results (a minimal sketch follows this list).

      Keyword-driven testing:

        Tools such as QTP and Robot Framework are automation tools built primarily around keyword-driven testing.
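
    For illustration, here is a minimal data-driven sketch using unittest's subTest; the SEARCH_TERMS list is made-up sample data, not part of the original post:

    import unittest
    
    SEARCH_TERMS = ['unittest', 'selenium', 'pytest']  # the data set drives the runs
    
    class DataDrivenExample(unittest.TestCase):
        def test_terms(self):
            # editing SEARCH_TERMS changes what gets tested
            # without touching the test logic itself
            for term in SEARCH_TERMS:
                with self.subTest(term=term):
                    self.assertTrue(len(term) > 0)
    
    if __name__ == '__main__':
        unittest.main()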

     

    At present, automated testing is positioned mainly for smoke testing and regression testing.

      

    EG1:

    import time
    from datetime import datetime
    import unittest
    
    import os
    from selenium import webdriver
    
    class WebTestCase(unittest.TestCase):
    
        def setUp(self):
            self.testpng = os.path.join(os.getcwd(), 'testpng')
            # create the screenshot directory if it does not exist
            if not os.path.exists(self.testpng):
                os.makedirs(self.testpng)
    
            self.driver = webdriver.Chrome()
            self.base_url = 'http://baidu.com'
    
        def testsearch(self):
            self.driver.get(self.base_url)
            ele = self.driver.find_element_by_name('wd')
            ele.send_keys('unittest')
            time.sleep(2)
            # save a timestamped screenshot; '%S' (seconds) fixes the original
            # '%s', which is not a documented strftime directive
            fpng = os.path.join(self.testpng, 'test_%s.png' % (datetime.now().strftime('%Y-%m-%d_%H-%M-%S'),))
            self.driver.get_screenshot_as_file(fpng)
    
        def tearDown(self):
            self.driver.quit()
    
    
    def suite():
        suite = unittest.TestSuite()
        suite.addTest(WebTestCase('testsearch'))
        return suite
    
    def suite2():
        return unittest.makeSuite(WebTestCase,'test')
    
    if __name__ == '__main__':
        suite = suite2()
        runner = unittest.TextTestRunner()
        runner.run(suite)
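
    Note the two suite builders above: suite() adds a single test explicitly, while suite2() uses unittest.makeSuite to collect every method whose name starts with 'test'. makeSuite is deprecated in newer Python; the standard-loader equivalent is a one-liner (a sketch, same behaviour):

    def suite3():
        # collects all test* methods of WebTestCase, like makeSuite,
        # but via the non-deprecated TestLoader API
        return unittest.TestLoader().loadTestsFromTestCase(WebTestCase)

    Also worth knowing: Selenium 4 removed the find_element_by_* helpers, so on current Selenium the lookup becomes driver.find_element(By.NAME, 'wd'), with By imported from selenium.webdriver.common.by.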
    

      

    EG2:

    test_case.py
    
    import os
    
    # collect every .py file under test_case/ (except __init__.py) and run
    # each one in a subprocess, appending its output to log.txt
    case_d = os.path.join(os.getcwd(), 'test_case')
    caselist = os.listdir(case_d)
    
    
    for case in caselist:
        if case.endswith('.py') and case != '__init__.py':
            print(case)
            os.system('python3 %s 1>>log.txt 2>&1' % os.path.join(case_d, case))
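
    As an alternative to launching one interpreter per file, unittest's built-in discovery can collect and run the cases in a single process. A minimal sketch; discover's default pattern is 'test*.py', so a looser pattern is passed here to match files like baidu.py (this assumes the test modules are importable):

    import unittest
    
    suite = unittest.defaultTestLoader.discover('test_case', pattern='*.py')
    unittest.TextTestRunner(verbosity=2).run(suite)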
    

      

    test_case/baidu.py
    
    
    import unittest, time, re
    from selenium import webdriver  # was missing in the original; setUp uses webdriver.Chrome()
    
    class Baidu(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Chrome()
            self.driver.implicitly_wait(30)
            self.base_url = 'http://baidu.com'
            self.verifications_errors = []
            self.accept_next_alert = True
    
        def test_baidu_search(self):
            driver = self.driver
            driver.get(self.base_url + '/')
            driver.find_element_by_name('wd').send_keys('unittest')
            driver.find_element_by_id('su').click()
            time.sleep(2)
            driver.close()
    
        def tearDown(self):
            self.driver.quit()
            self.assertEqual([], self.verifications_errors)
    
    if __name__ == '__main__':
        unittest.main()
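
    One detail worth noting in this example: driver.close() at the end of the test closes the current browser window, while driver.quit() in tearDown ends the whole WebDriver session. Calling quit() after close() is harmless, but close() alone would leave the chromedriver process running.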
    

      

    How to fix the ImportError: No module named 'StringIO' raised by HTMLTestRunner.py under Python 3:

      http://www.cnblogs.com/testyao/p/5658200.html
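
    The root cause is that HTMLTestRunner.py was written for Python 2, where StringIO was a top-level module; in Python 3 the equivalents live in io. The usual port, sketched below, follows the approach in the linked post; the exact edits may vary with the HTMLTestRunner version:

    # Typical Python 3 edits inside HTMLTestRunner.py:
    #   import StringIO                          ->  import io
    #   self.outputBuffer = StringIO.StringIO()  ->  self.outputBuffer = io.BytesIO()
    #   print >> sys.stderr, '...'               ->  print('...', file=sys.stderr)
    # plus str/bytes decoding adjustments where the buffer is read back.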

    EG4:

    import unittest
    import time
    from selenium import webdriver
    from HTMLTestRunner import HTMLTestRunner
    
    class Baidu(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Chrome()
            self.driver.implicitly_wait(30)
            self.base_url = 'http://baidu.com'
            self.verifications_errors = []
            self.accept_next_alert = True
    
        def test_baidu_search(self):
            driver = self.driver
            driver.get(self.base_url + '/')
            driver.find_element_by_name('wd').send_keys('unittest')
            driver.find_element_by_id('su').click()
            time.sleep(2)
            driver.close()
    
        def tearDown(self):
            self.driver.quit()
            self.assertEqual([], self.verifications_errors)
    
    if __name__ == '__main__':
        suite = unittest.TestSuite()
    
        suite.addTest(Baidu('test_baidu_search'))
        filename = 'report.html'
        with open(filename, 'wb') as fp:
            runner = HTMLTestRunner(
                stream=fp,
                title='baidu search test',
                description='run report')
            runner.run(suite)
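
    A common refinement (an addition, not in the original post) is to timestamp the report name so repeated runs do not overwrite each other:

    from datetime import datetime
    
    filename = 'report_%s.html' % datetime.now().strftime('%Y-%m-%d_%H-%M-%S')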

    Result: the run writes report.html; opening it in a browser shows the pass/fail summary for the suite.
