In this post I'd like to walk through how Nova creates a virtual machine. Many readers may not be familiar with the process, so I'm sharing this write-up for reference; I hope you get a lot out of it. Let's dive in!
Overview:
1. The instance creation API
As usual, let's start with the API request:
REQ: curl -i 'http://ubuntu80:8774/v2/0e962df9db3f4469b3d9bfbc5ffdaf7e/servers' \
    -X POST \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -H "User-Agent: python-novaclient" \
    -H "X-Auth-Project-Id: admin" \
    -H "X-Auth-Token: {SHA1}e87219521f61238b143fbb323b962930380ce022" \
    -d '{"server": {"name": "ubuntu_test", "imageRef": "cde1d850-65bb-48f6-8ee9-b990c7ccf158", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cfa25cef-96c3-46f1-8522-d9518eb5a451"}]}}'
As before, this request maps to a Controller method,
located at:
nova.api.openstack.compute.servers.Controller.create
Note that this method carries the decorator @wsgi.response(202). In HTTP, status code 202 means the server has accepted the request but has not yet processed it, which tells us that instance creation is an asynchronous task.
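To make the asynchronous behaviour concrete, here is a small client-side sketch (it is not part of Nova; the endpoint, token and IDs are placeholders copied from the request above) that sends the boot request and then polls the server until the background work completes:

import time
import requests

NOVA = 'http://ubuntu80:8774/v2/0e962df9db3f4469b3d9bfbc5ffdaf7e'
HEADERS = {
    'Content-Type': 'application/json',
    'X-Auth-Token': '<token>',            # placeholder
}

body = {'server': {'name': 'ubuntu_test',
                   'imageRef': 'cde1d850-65bb-48f6-8ee9-b990c7ccf158',
                   'flavorRef': '2',
                   'min_count': 1, 'max_count': 1,
                   'networks': [{'uuid': 'cfa25cef-96c3-46f1-8522-d9518eb5a451'}]}}

# The POST comes back with 202 Accepted almost immediately ...
resp = requests.post(NOVA + '/servers', json=body, headers=HEADERS)
print(resp.status_code)                    # expected: 202
server_id = resp.json()['server']['id']

# ... while the actual build (scheduling, networking, block device mapping,
# spawning) continues server-side; poll until it settles.
while True:
    server = requests.get(NOVA + '/servers/%s' % server_id,
                          headers=HEADERS).json()['server']
    if server['status'] in ('ACTIVE', 'ERROR'):
        break
    time.sleep(2)
print(server['status'])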
This method ultimately calls self.compute_api.create(...), where self.compute_api is set to compute.API() in __init__(...).
So compute.API() resolves to nova.compute.api.API.create(...), which in turn calls nova.compute.api.API._create_instance(...).
The interesting part happens inside nova.compute.api.API._create_instance(...).
2. The task state first changes to SCHEDULING
Inside nova.compute.api.API._create_instance(...) there is this call:
instances = self._provision_instances(context, instance_type, min_count,
        max_count, base_options, boot_meta, security_groups,
        block_device_mapping, shutdown_terminate, instance_group,
        check_server_group_quota)
This method lives at nova.compute.api.API._provision_instances and contains the following call:
instance = self.create_db_entry_for_new_instance(...)
Inside nova.compute.api.API.create_db_entry_for_new_instance (the target of self.create_db_entry_for_new_instance(...)) we find:
self._populate_instance_for_create(context, instance, image, index, security_group, instance_type)
That resolves to nova.compute.api.API._populate_instance_for_create, which is where the task state is first set to scheduling:
instance.vm_state = vm_states.BUILDING
instance.task_state = task_states.SCHEDULING
Back in _provision_instances, the remaining work is mainly reserving quota.
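For orientation, the listing below summarizes the (vm_state, task_state) pairs a successfully booted instance passes through, based on the code paths walked in this article; it mirrors Nova's state constants but is deliberately simplified and not an exhaustive state machine:

# Illustrative only: the (vm_state, task_state) pairs of a successful boot.
BOOT_STATES = [
    ('building', 'scheduling'),            # _populate_instance_for_create
    ('building', 'networking'),            # _build_resources
    ('building', 'block_device_mapping'),  # _build_resources
    ('building', 'spawning'),              # _build_and_run_instance
    ('active', None),                      # task state cleared after spawn
]

for vm_state, task_state in BOOT_STATES:
    print('%-10s -> %s' % (vm_state, task_state))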
3. From nova-api to nova-conductor
Also inside nova.compute.api.API._create_instance(...) is this call:
self.compute_task_api.build_instances(context, instances=instances,
        image=boot_meta, filter_properties=filter_properties,
        admin_password=admin_password, injected_files=injected_files,
        requested_networks=requested_networks,
        security_groups=security_groups,
        block_device_mapping=block_device_mapping, legacy_bdm=False)
From this point on we leave nova-api: the request is handed over to code in nova-conductor, nova-scheduler and nova-compute.
@property
def compute_task_api(self):
    if self._compute_task_api is None:
        # TODO(alaski): Remove calls into here from conductor manager so
        # that this isn't necessary. #1180540
        from nova import conductor
        self._compute_task_api = conductor.ComputeTaskAPI()
    return self._compute_task_api
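As an aside, this property is simply a lazy-initialization pattern: the conductor API object is imported and constructed on first access, then cached. A minimal self-contained sketch of the same idea (the names here are invented for illustration):

# Invented names; only the lazy-initialization shape mirrors the real property.
class ExampleComputeAPI(object):
    def __init__(self):
        self._compute_task_api = None

    @property
    def compute_task_api(self):
        if self._compute_task_api is None:
            # Deferred creation, standing in for the deferred
            # "from nova import conductor" import in the real code.
            self._compute_task_api = object()
        return self._compute_task_api

api = ExampleComputeAPI()
assert api.compute_task_api is api.compute_task_api   # built once, then cached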
4. nova-conductor calls nova-scheduler and nova-compute
We have now reached the conductor side, at nova.conductor.ComputeTaskAPI:
def ComputeTaskAPI(*args, **kwargs):
    use_local = kwargs.pop('use_local', False)
    if oslo.config.cfg.CONF.conductor.use_local or use_local:
        api = conductor_api.LocalComputeTaskAPI
    else:
        api = conductor_api.ComputeTaskAPI
    return api(*args, **kwargs)
Here use_local defaults to False, so unless CONF.conductor.use_local is enabled the RPC-backed conductor_api.ComputeTaskAPI is chosen; with use_local set, the in-process variant
api = conductor_api.LocalComputeTaskAPI
is used instead. Both paths end up in the same place: LocalComputeTaskAPI (nova.conductor.api.LocalComputeTaskAPI) creates manager.ComputeTaskManager, i.e. nova.conductor.manager.ComputeTaskManager, directly in its constructor (__init__(...)), while the RPC variant reaches the same manager running inside the nova-conductor service.
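The selection logic boils down to a small factory keyed on a config flag; the sketch below (not Nova code, with invented class names) captures the shape of that dispatch:

# Invented class names; only the config-driven dispatch mirrors Nova.
class LocalComputeTaskAPI(object):
    """Would call ComputeTaskManager directly, in-process."""

class RPCComputeTaskAPI(object):
    """Would forward the request to the nova-conductor service over RPC."""

def compute_task_api_factory(use_local=False):
    return LocalComputeTaskAPI() if use_local else RPCComputeTaskAPI()

print(type(compute_task_api_factory(use_local=True)).__name__)  # LocalComputeTaskAPI
print(type(compute_task_api_factory()).__name__)                # RPCComputeTaskAPI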
Now look at this manager's build_instances method (nova.conductor.manager.ComputeTaskManager.build_instances(...)).
In build_instances(), nova-conductor builds a request_spec dictionary:
request_spec = scheduler_utils.build_request_spec(...)
It contains the detailed description of the requested virtual machine; nova-scheduler uses this information to pick the best host for it:
hosts = self.scheduler_client.select_destinations(..., request_spec, ...)
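To give a feel for what flows between these two services, here is a rough, abridged sketch of a request_spec and of the result returned by select_destinations(); the exact keys vary between releases, so treat the values as illustrative placeholders only:

# Illustrative placeholders only; real keys and values vary by release.
request_spec = {
    'image': {'id': 'cde1d850-65bb-48f6-8ee9-b990c7ccf158', 'name': 'ubuntu'},
    'instance_properties': {'vcpus': 1, 'memory_mb': 2048, 'root_gb': 20},
    'instance_type': {'id': 2, 'name': 'm1.small'},
    'num_instances': 1,
}

# select_destinations() hands back one candidate per requested instance,
# roughly of this shape, which build_and_run_instance() consumes below:
hosts = [{'host': 'compute-1', 'nodename': 'compute-1', 'limits': {}}]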
nova-conductor then asks nova-compute over RPC to create the virtual machine:
self.compute_rpcapi.build_and_run_instance(context, instance=instance,
        host=host['host'], image=image, request_spec=request_spec,
        filter_properties=local_filter_props,
        admin_password=admin_password, injected_files=injected_files,
        requested_networks=requested_networks,
        security_groups=security_groups, block_device_mapping=bdms,
        node=host['nodename'], limits=host['limits'])
This lands in nova.compute.rpcapi.ComputeAPI.build_and_run_instance.
There you can see that the RPC method being invoked is 'build_and_run_instance', and that cctxt.cast(...) is an asynchronous remote call: it sends the message and returns immediately without waiting for the remote result. See the oslo.messaging documentation for details (a short sketch of its usage follows the call below).
cctxt.cast(ctxt, 'build_and_run_instance', instance=instance, image=image,
        request_spec=request_spec, filter_properties=filter_properties,
        admin_password=admin_password, injected_files=injected_files,
        requested_networks=requested_networks,
        security_groups=security_groups,
        block_device_mapping=block_device_mapping, node=node, limits=limits)
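For readers unfamiliar with oslo.messaging, here is a minimal standalone sketch of the cast/call distinction; the transport URL, target and arguments are placeholders for illustration, not Nova's actual configuration:

from oslo_config import cfg
import oslo_messaging as messaging

# Placeholder transport URL and target; in Nova these come from nova.conf.
transport = messaging.get_transport(cfg.CONF,
                                    url='rabbit://guest:guest@localhost:5672/')
target = messaging.Target(topic='compute', version='4.0')
client = messaging.RPCClient(transport, target)

cctxt = client.prepare(server='compute-1')   # address a specific compute host
ctxt = {}                                    # request context (dict form)

# cast(): send the message and return immediately, ignoring any result.
cctxt.cast(ctxt, 'build_and_run_instance', instance={'uuid': '...'})

# call(): send the message and block until the remote method returns a value.
# result = cctxt.call(ctxt, 'get_console_output', instance={'uuid': '...'})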
On the compute node this is handled by nova.compute.manager.ComputeManager.build_and_run_instance(...), which dispatches _do_build_and_run_instance(...) via a spawn call (i.e. in a separate green thread), so the RPC handler itself returns quickly.
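Roughly speaking, that spawn dispatch works like the following; Nova wraps this in its own utils helper, so plain eventlet is shown here purely as an illustration:

import eventlet

def do_build(name):
    print('building %s' % name)

eventlet.spawn_n(do_build, 'ubuntu_test')   # schedule the build, return at once
print('rpc handler already returned')
eventlet.sleep(0)                           # yield so the green thread can run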
The main call inside _do_build_and_run_instance(...) is the _build_and_run_instance function (nova.compute.manager.ComputeManager._build_and_run_instance(...)).
5. Building and running the instance
Once we locate nova.compute.manager.ComputeManager._build_and_run_instance(...), we see the following code:
def _build_and_run_instance(self, context, instance, image, injected_files,
        admin_password, requested_networks, security_groups,
        block_device_mapping, node, limits, filter_properties):

    image_name = image.get('name')
    self._notify_about_instance_usage(context, instance, 'create.start',
            extra_usage_info={'image_name': image_name})
    try:
        # Resource tracker (claim)
        rt = self._get_resource_tracker(node)
        with rt.instance_claim(context, instance, limits) as inst_claim:
            # NOTE(russellb) It's important that this validation be done
            # *after* the resource tracker instance claim, as that is where
            # the host is set on the instance.
            self._validate_instance_group_policy(context, instance,
                    filter_properties)
            # Allocate resources, including network and storage; inside this
            # context manager the task state goes from SCHEDULING to
            # NETWORKING and then to BLOCK_DEVICE_MAPPING
            with self._build_resources(context, instance,
                    requested_networks, security_groups, image,
                    block_device_mapping) as resources:
                instance.vm_state = vm_states.BUILDING
                # The task state becomes SPAWNING
                instance.task_state = task_states.SPAWNING
                instance.numa_topology = inst_claim.claimed_numa_topology
                instance.save(expected_task_state=
                              task_states.BLOCK_DEVICE_MAPPING)
                block_device_info = resources['block_device_info']
                network_info = resources['network_info']
                # Call the underlying virt driver to spawn the instance
                self.driver.spawn(context, instance, image,
                                  injected_files, admin_password,
                                  network_info=network_info,
                                  block_device_info=block_device_info)
    except ...:  # (error handling elided)
        ...

    # NOTE(alaski): This is only useful during reschedules, remove it now.
    instance.system_metadata.pop('network_allocated', None)

    # Record the instance's power state
    instance.power_state = self._get_power_state(context, instance)
    # The instance is up: mark it ACTIVE
    instance.vm_state = vm_states.ACTIVE
    # Clear the task state
    instance.task_state = None
    # Record the launch time
    instance.launched_at = timeutils.utcnow()

    try:
        instance.save(expected_task_state=task_states.SPAWNING)
    except (exception.InstanceNotFound,
            exception.UnexpectedDeletingTaskStateError) as e:
        with excutils.save_and_reraise_exception():
            self._notify_about_instance_usage(context, instance,
                    'create.end', fault=e)

    # Notify that the create process has finished
    self._notify_about_instance_usage(context, instance, 'create.end',
            extra_usage_info={'message': _('Success')},
            network_info=network_info)
The first step is to obtain a resource tracker (RT). Note that RTs come in two flavours, the claim tracker (Claim RT) and the periodic tracker (Periodic RT), and you can also plug in your own extensions (Extensible RT).
As the name suggests, the RT used in _build_and_run_instance is the claim tracker: it verifies the resources available on the compute node and raises an exception if the claim cannot be satisfied.
rt = self._get_resource_tracker(node)
with rt.instance_claim(context, instance, limits) as inst_claim:
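To see why the with-statement matters, here is a simplified, self-contained sketch of the claim pattern (this is not the real ResourceTracker code): resources are reserved up front and given back automatically if building fails inside the block:

import contextlib

class FakeTracker(object):
    def __init__(self, free_mb):
        self.free_mb = free_mb

    @contextlib.contextmanager
    def instance_claim(self, requested_mb):
        if requested_mb > self.free_mb:
            raise Exception('ComputeResourcesUnavailable')
        self.free_mb -= requested_mb          # claim the resources up front
        try:
            yield
        except Exception:
            self.free_mb += requested_mb      # abort the claim if the build fails
            raise

rt = FakeTracker(free_mb=4096)
with rt.instance_claim(2048):
    pass                                      # the instance would be built here
print(rt.free_mb)                             # 2048: the claim sticks on success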
The _build_resources function also deserves a mention: inside it, instance.task_state goes from task_states.SCHEDULING to task_states.NETWORKING and then to task_states.BLOCK_DEVICE_MAPPING.
self._build_resources(context, instance, requested_networks, security_groups, image, block_device_mapping)
Once resource allocation is complete, the task state moves from task_states.BLOCK_DEVICE_MAPPING to task_states.SPAWNING:
instance.task_state = task_states.SPAWNING
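The instance.save(expected_task_state=task_states.BLOCK_DEVICE_MAPPING) call that follows only commits the update if the task state is still the one the caller expects, guarding against races such as the instance being deleted mid-build. A toy model of that guard (not the real Instance object):

class ToyInstance(object):
    def __init__(self, task_state):
        self.task_state = task_state

    def save(self, new_task_state, expected_task_state=None):
        if (expected_task_state is not None
                and self.task_state != expected_task_state):
            raise RuntimeError('UnexpectedTaskStateError: expected %s, got %s'
                               % (expected_task_state, self.task_state))
        self.task_state = new_task_state

inst = ToyInstance('block_device_mapping')
inst.save('spawning', expected_task_state='block_device_mapping')   # succeeds
print(inst.task_state)                                              # spawning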
When everything is ready, self.driver.spawn is called to spawn the instance; underneath, this is the libvirt layer that actually brings the guest to life:
self.driver.spawn(context, instance, image, injected_files, admin_password,
                  network_info=network_info,
                  block_device_info=block_device_info)
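Stripped down to the bare libvirt Python API, "spawning" amounts to something like the sketch below; the real LibvirtDriver.spawn() additionally prepares disk images, networking and a full domain XML, so the XML here is only a trimmed placeholder:

import libvirt

# Trimmed placeholder XML; a real guest also needs disks, NICs, etc.
domain_xml = """
<domain type='kvm'>
  <name>ubuntu_test</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
dom = conn.defineXML(domain_xml)        # persist the domain definition
dom.create()                            # power it on: this is the "spawn"
conn.close()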
After that, the instance's power state is recorded, it is marked ACTIVE, the launch timestamp is set, and the 'create.end' notification announces that creation has finished. Success!
That's everything in "how Nova creates a virtual machine". Thanks for reading! I hope this walkthrough has given you a clearer picture; if you want to learn more, feel free to follow the Yisu Cloud industry news channel.