Changeset 76678 in vbox
- Timestamp:
- Jan 7, 2019, 1:48:16 PM (6 years ago)
- svn:sync-xref-src-repo-rev:
- 127984
- Location:
- trunk
- Files:
-
- 29 edited
-
trunk
- Property svn:mergeinfo
-
old new
8 8 /branches/VBox-5.0:104445,104938,104943,104950,104952-104953,104987-104988,104990,106453
9 9 /branches/VBox-5.1:112367,115992,116543,116550,116568,116573
10    /branches/VBox-5.2:119536,120083,120099,120213,120221,120239,123597-123598,123600-123601,123755,125768,125779-125780,125812
   10 /branches/VBox-5.2:119536,120083,120099,120213,120221,120239,123597-123598,123600-123601,123755,124260,124263,124271,124273,124277-124279,124284-124286,124288-124290,125768,125779-125780,125812
11 11 /branches/andy/draganddrop:90781-91268
12 12 /branches/andy/guestctrl20:78916,78930
-
trunk/doc/manual/en_US/user_BasicConcepts.xml
r76079 r76678 1490 1490 paging in addition to hardware virtualization. For 1491 1491 technical details, see <xref linkend="nestedpaging" />. 1492 For Intel EPT security recommendations, see 1493 <xref linkend="sec-rec-cve-2018-3646" />. 1492 1494 </para> 1493 1495 </listitem> -
trunk/doc/manual/en_US/user_Security.xml
r76078 r76678 515 515 --> 516 516 517 <sect1 id="security-recommendations"> 518 <title>Security Recommendations</title> 519 520 <para>This section contains security recommendations for specific issues. 521 By default, VirtualBox will configure the VMs to run in a secure manner; 522 however, this may not always be possible without additional user actions (e.g. 523 host OS / firmware configuration changes).</para> 524 525 <sect2 id="sec-rec-cve-2018-3646"> 526 <title>CVE-2018-3646</title> 527 528 <para>This security issue affects a range of Intel CPUs with nested paging. 529 AMD CPUs are expected not to be impacted (pending direct confirmation by AMD). 530 Also, the issue does not affect VMs running with hardware virtualization 531 disabled or with nested paging disabled.</para> 532 533 <para>For more information about nested paging, see <xref linkend="nestedpaging" />.</para> 534 535 <para>Mitigation options:</para> 536 537 <sect3> 538 <title>Disable nested paging</title> 539 540 <para>By disabling nested paging (EPT), the VMM will construct page tables 541 shadowing the ones in the guest. It is not possible for the guest to insert 542 anything fishy into the page tables, since the VMM carefully validates each 543 entry before shadowing it.</para> 544 545 <para>As a side effect of disabling nested paging, several CPU features 546 will not be made available to the guest. Among these features are AVX, 547 AVX2, XSAVE, AESNI, and POPCNT. Not all guests may be able to cope with 548 dropping these features after installation. Also, for some guests, 549 especially in SMP configurations, there could be stability issues arising 550 from disabling nested paging. Finally, some workloads may experience a 551 performance degradation.</para> 552 </sect3> 553 554 <sect3> 555 <title>Flushing the level 1 data cache</title> 556 557 <para>This aims at removing potentially sensitive data from the level 1 558 data cache when running guest code. 
However, it is made difficult by 559 hyper-threading setups sharing the level 1 cache and thereby potentially 560 letting the other thread in a pair refill the cache with data the user 561 does not want the guest to see. In addition, flushing the level 1 data 562 cache is usually not without performance side effects.</para> 563 564 <para>Up-to-date CPU microcode is a prerequisite for the cache flushing 565 mitigations. Some host OSes may install these automatically, though it 566 has traditionally been a task best performed by the system firmware. So, 567 please check with your system / mainboard manufacturer for the latest 568 firmware update.</para> 569 570 <para>We recommend disabling hyper-threading on the host. This is 571 traditionally done from the firmware setup, but some OSes also offer 572 ways to disable HT. In some cases it may be disabled by default, but please 573 verify, as the effectiveness of the mitigation depends on it.</para> 574 575 <para>The default action taken by VirtualBox is to flush the level 1 576 data cache when a thread is scheduled to execute guest code, rather 577 than on each VM entry. This reduces the performance impact, while 578 making the assumption that the host OS will not handle security 579 sensitive data from interrupt handlers and similar without taking 580 precautions.</para> 581 582 <para>A more aggressive flushing option is provided via the VBoxManage 583 modifyvm option <computeroutput>--l1d-flush-on-vm-entry</computeroutput>. 584 When enabled, the level 1 data cache will be flushed on every VM entry. 585 The performance impact is greater than with the default option, though 586 this of course depends on the workload. 
Workloads producing a lot of 587 VM exits (like networking, VGA access, and similar) will probably be 588 most impacted.</para> 589 590 <para>For users not concerned by this security issue, the default 591 mitigation can be disabled using</para> 592 <para><computeroutput>VBoxManage modifyvm name --l1d-flush-on-sched off</computeroutput></para> 593 </sect3> 594 595 </sect2> 596 597 </sect1> 598 517 599 </chapter> -
trunk/doc/manual/en_US/user_Technical.xml
r76080 r76678 1332 1332 command. See <xref linkend="vboxmanage-modifyvm" />. 1333 1333 </para> 1334 1335 <para> 1336 If you have an Intel CPU with EPT, please consult 1337 <xref linkend="sec-rec-cve-2018-3646" /> for security concerns 1338 regarding EPT. 1339 </para> 1334 1340 </listitem> 1335 1341 -
trunk/doc/manual/en_US/user_VBoxManage.xml
r76278 r76678 1004 1004 the processor of your host system. See 1005 1005 <xref 1006 linkend="hwvirt" /> .1006 linkend="hwvirt" /> and <xref linkend="sec-rec-cve-2018-3646" />. 1007 1007 </para> 1008 1008 </listitem> -
trunk/include/VBox/settings.h
r76585 r76678 1032 1032 bool fSpecCtrl; //< added out of cycle, after 1.16 was out. 1033 1033 bool fSpecCtrlByHost; //< added out of cycle, after 1.16 was out. 1034 bool fL1DFlushOnSched ; //< added out of cycle, after 1.16 was out. 1035 bool fL1DFlushOnVMEntry ; //< added out of cycle, after 1.16 was out. 1034 1036 bool fNestedHWVirt; //< requires settings version 1.17 (VirtualBox 6.0) 1035 1037 typedef enum LongModeType { LongMode_Enabled, LongMode_Disabled, LongMode_Legacy } LongModeType; -
trunk/include/VBox/vmm/cpum.h
r76585 r76678 733 733 kCpumMsrWrFn_Ia32SpecCtrl, 734 734 kCpumMsrWrFn_Ia32PredCmd, 735 kCpumMsrWrFn_Ia32FlushCmd, 735 736 736 737 kCpumMsrWrFn_Amd64Efer, … … 1061 1062 /** Supports IA32_SPEC_CTRL.STIBP. */ 1062 1063 uint32_t fStibp : 1; 1064 /** Supports IA32_FLUSH_CMD. */ 1065 uint32_t fFlushCmd : 1; 1063 1066 /** Supports IA32_ARCH_CAP. */ 1064 1067 uint32_t fArchCap : 1; … … 1101 1104 uint32_t fVmx : 1; 1102 1105 1103 /** Indicates that speculative execution control CPUID bits and 1104 * MSRs are exposed. The details are different for Intel and1105 * AMD but both have similarfunctionality. */1106 /** Indicates that speculative execution control CPUID bits and MSRs are exposed. 1107 * The details are different for Intel and AMD but both have similar 1108 * functionality. */ 1106 1109 uint32_t fSpeculationControl : 1; 1107 1110 1111 /** MSR_IA32_ARCH_CAPABILITIES: RDCL_NO (bit 0). 1112 * @remarks Only safe to use after CPUM ring-0 init! */ 1113 uint32_t fArchRdclNo : 1; 1114 /** MSR_IA32_ARCH_CAPABILITIES: IBRS_ALL (bit 1). 1115 * @remarks Only safe to use after CPUM ring-0 init! */ 1116 uint32_t fArchIbrsAll : 1; 1117 /** MSR_IA32_ARCH_CAPABILITIES: RSB Override (bit 2). 1118 * @remarks Only safe to use after CPUM ring-0 init! */ 1119 uint32_t fArchRsbOverride : 1; 1120 /** MSR_IA32_ARCH_CAPABILITIES: VMM need not flush L1D (bit 3). 1121 * @remarks Only safe to use after CPUM ring-0 init! */ 1122 uint32_t fArchVmmNeedNotFlushL1d : 1; 1123 1108 1124 /** Alignment padding / reserved for future use. */ 1109 uint32_t fPadding : 1 5;1125 uint32_t fPadding : 10; 1110 1126 1111 1127 /** SVM: Supports Nested-paging. */ -
trunk/include/VBox/vmm/cpum.mac
r76553 r76678 290 290 %define CPUMCTX_WSF_IBPB_EXIT RT_BIT_32(0) 291 291 %define CPUMCTX_WSF_IBPB_ENTRY RT_BIT_32(1) 292 %define CPUMCTX_WSF_L1D_ENTRY RT_BIT_32(2) 293 292 294 293 295 %define CPUMSELREG_FLAGS_VALID 0x0001 -
trunk/include/VBox/vmm/cpumctx.h
r76585 r76678 943 943 /** Touch IA32_PRED_CMD.IBPB on VM entry. */ 944 944 #define CPUMCTX_WSF_IBPB_ENTRY RT_BIT_32(1) 945 /** Touch IA32_FLUSH_CMD.L1D on VM entry. */ 946 #define CPUMCTX_WSF_L1D_ENTRY RT_BIT_32(2) 945 947 /** @} */ 946 948 -
trunk/include/iprt/x86.h
r76585 r76678 619 619 /** EDX Bit 27 - IBRS & IBPB - Supports the STIBP flag in IA32_SPEC_CTRL. */ 620 620 #define X86_CPUID_STEXT_FEATURE_EDX_STIBP RT_BIT_32(27) 621 621 /** EDX Bit 28 - FLUSH_CMD - Supports IA32_FLUSH_CMD MSR. */ 622 #define X86_CPUID_STEXT_FEATURE_EDX_FLUSH_CMD RT_BIT_32(28) 622 623 /** EDX Bit 29 - ARCHCAP - Supports the IA32_ARCH_CAPABILITIES MSR. */ 623 624 #define X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP RT_BIT_32(29) … … 1242 1243 #define MSR_IA32_MTRR_CAP 0xFE 1243 1244 1244 /** Architecture capabilities (bugfixes). 1245 * @note May move */ 1245 /** Architecture capabilities (bugfixes). */ 1246 1246 #define MSR_IA32_ARCH_CAPABILITIES UINT32_C(0x10a) 1247 /** CPU is no subject to spectreproblems. */1248 #define MSR_IA32_ARCH_CAP_F_ SPECTRE_FIXRT_BIT_32(0)1247 /** CPU is not subject to meltdown problems. */ 1248 #define MSR_IA32_ARCH_CAP_F_RDCL_NO RT_BIT_32(0) 1249 1249 /** CPU has better IBRS and you can leave it on all the time. */ 1250 #define MSR_IA32_ARCH_CAP_F_BETTER_IBRS RT_BIT_32(1) 1250 #define MSR_IA32_ARCH_CAP_F_IBRS_ALL RT_BIT_32(1) 1251 /** CPU has return stack buffer (RSB) override. */ 1252 #define MSR_IA32_ARCH_CAP_F_RSBO RT_BIT_32(2) 1253 /** Virtual machine monitors need not flush the level 1 data cache on VM entry. 1254 * This is also the case when MSR_IA32_ARCH_CAP_F_RDCL_NO is set. */ 1255 #define MSR_IA32_ARCH_CAP_F_VMM_NEED_NOT_FLUSH_L1D RT_BIT_32(3) 1256 1257 /** Flush command register. */ 1258 #define MSR_IA32_FLUSH_CMD UINT32_C(0x10b) 1259 /** Flush the level 1 data cache when this bit is written. */ 1260 #define MSR_IA32_FLUSH_CMD_F_L1D RT_BIT_32(0) 1251 1261 1252 1262 /** Cache control/info. */ -
trunk/include/iprt/x86.mac
r76557 r76678 185 185 %define X86_CPUID_STEXT_FEATURE_EDX_IBRS_IBPB RT_BIT_32(26) 186 186 %define X86_CPUID_STEXT_FEATURE_EDX_STIBP RT_BIT_32(27) 187 %define X86_CPUID_STEXT_FEATURE_EDX_FLUSH_CMD RT_BIT_32(28) 187 188 %define X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP RT_BIT_32(29) 188 189 %define X86_CPUID_EXT_FEATURE_ECX_LAHF_SAHF RT_BIT_32(0) … … 432 433 %define MSR_IA32_MTRR_CAP 0xFE 433 434 %define MSR_IA32_ARCH_CAPABILITIES 0x10a 434 %define MSR_IA32_ARCH_CAP_F_SPECTRE_FIX RT_BIT_32(0) 435 %define MSR_IA32_ARCH_CAP_F_BETTER_IBRS RT_BIT_32(1) 435 %define MSR_IA32_ARCH_CAP_F_RDCL_NO RT_BIT_32(0) 436 %define MSR_IA32_ARCH_CAP_F_IBRS_ALL RT_BIT_32(1) 437 %define MSR_IA32_ARCH_CAP_F_RSBO RT_BIT_32(2) 438 %define MSR_IA32_ARCH_CAP_F_VMM_NEED_NOT_FLUSH_L1D RT_BIT_32(3) 439 %define MSR_IA32_FLUSH_CMD 0x10b 440 %define MSR_IA32_FLUSH_CMD_F_L1D RT_BIT_32(0) 436 441 %define MSR_BBL_CR_CTL3 0x11e 437 442 %ifndef MSR_IA32_SYSENTER_CS -
trunk/src/VBox
- Property svn:mergeinfo
-
old new
8 8 /branches/VBox-5.0/src/VBox:104938,104943,104950,104987-104988,104990,106453
9 9 /branches/VBox-5.1/src/VBox:112367,116543,116550,116568,116573
10    /branches/VBox-5.2/src/VBox:119536,120083,120099,120213,120221,120239,123597-123598,123600-123601,123755,125768,125779-125780,125812
   10 /branches/VBox-5.2/src/VBox:119536,120083,120099,120213,120221,120239,123597-123598,123600-123601,123755,124263,124273,124277-124279,124284-124286,124288-124290,125768,125779-125780,125812
11 11 /branches/andy/draganddrop/src/VBox:90781-91268
12 12 /branches/andy/guestctrl20/src/VBox:78916,78930
-
trunk/src/VBox/Frontends
- Property svn:mergeinfo
-
old new
7 7 /branches/VBox-4.3/src/VBox/Frontends:91223
8 8 /branches/VBox-4.3/trunk/src/VBox/Frontends:91223
9   /branches/VBox-5.2/src/VBox/Frontends:120213
  9 /branches/VBox-5.2/src/VBox/Frontends:120213,124288
10 10 /branches/andy/draganddrop/src/VBox/Frontends:90781-91268
11 11 /branches/andy/guestctrl20/src/VBox/Frontends:78916,78930
-
trunk/src/VBox/Frontends/VBoxManage/VBoxManageHelp.cpp
r76553 r76678 517 517 " [--ibpb-on-vm-entry on|off]\n" 518 518 " [--spec-ctrl on|off]\n" 519 " [--l1d-flush-on-sched on|off]\n" 520 " [--l1d-flush-on-vm-entry on|off]\n" 519 521 " [--nested-hw-virt on|off]\n" 520 522 " [--cpu-profile \"host|Intel 80[86|286|386]\"]\n" -
trunk/src/VBox/Frontends/VBoxManage/VBoxManageModifyVM.cpp
r76553 r76678 78 78 MODIFYVM_IBPB_ON_VM_ENTRY, 79 79 MODIFYVM_SPEC_CTRL, 80 MODIFYVM_L1D_FLUSH_ON_SCHED, 81 MODIFYVM_L1D_FLUSH_ON_VM_ENTRY, 80 82 MODIFYVM_NESTED_HW_VIRT, 81 83 MODIFYVM_CPUS, … … 264 266 { "--ibpb-on-vm-entry", MODIFYVM_IBPB_ON_VM_ENTRY, RTGETOPT_REQ_BOOL_ONOFF }, 265 267 { "--spec-ctrl", MODIFYVM_SPEC_CTRL, RTGETOPT_REQ_BOOL_ONOFF }, 268 { "--l1d-flush-on-sched", MODIFYVM_L1D_FLUSH_ON_SCHED, RTGETOPT_REQ_BOOL_ONOFF }, 269 { "--l1d-flush-on-vm-entry", MODIFYVM_L1D_FLUSH_ON_VM_ENTRY, RTGETOPT_REQ_BOOL_ONOFF }, 266 270 { "--nested-hw-virt", MODIFYVM_NESTED_HW_VIRT, RTGETOPT_REQ_BOOL_ONOFF }, 267 271 { "--cpuid-set", MODIFYVM_SETCPUID, RTGETOPT_REQ_UINT32_OPTIONAL_PAIR | RTGETOPT_FLAG_HEX }, … … 798 802 break; 799 803 804 case MODIFYVM_L1D_FLUSH_ON_SCHED: 805 CHECK_ERROR(sessionMachine, SetCPUProperty(CPUPropertyType_L1DFlushOnEMTScheduling, ValueUnion.f)); 806 break; 807 808 case MODIFYVM_L1D_FLUSH_ON_VM_ENTRY: 809 CHECK_ERROR(sessionMachine, SetCPUProperty(CPUPropertyType_L1DFlushOnVMEntry, ValueUnion.f)); 810 break; 811 800 812 case MODIFYVM_NESTED_HW_VIRT: 801 813 CHECK_ERROR(sessionMachine, SetCPUProperty(CPUPropertyType_HWVirt, ValueUnion.f)); -
trunk/src/VBox/Main/idl/VirtualBox.xidl
r76298 r76678 1055 1055 If set, the speculation controls are managed by the host. This is intended 1056 1056 for guests which do not set the speculation controls themselves. 1057 Note! This has not yet been implemented beyond leaving everything to the host OS. 1058 </desc> 1059 </const> 1060 <const name="L1DFlushOnEMTScheduling" value="11"> 1061 <desc> 1062 If set and the host is affected by CVE-2018-3646, flushes the level 1 data 1063 cache when the EMT is scheduled to do ring-0 guest execution. There could 1064 be a small performance penalty for certain types of workloads. 1065 For security reasons this setting will be enabled by default. 1066 </desc> 1067 </const> 1068 <const name="L1DFlushOnVMEntry" value="12"> 1069 <desc> 1070 If set and the host is affected by CVE-2018-3646, flushes the level 1 data 1071 cache on every VM entry. This setting may significantly slow down workloads 1072 causing many VM exits, so it is only recommended for situations where there 1073 is a real need to be paranoid. 1057 1074 </desc> 1058 1075 </const> -
trunk/src/VBox/Main/include/MachineImpl.h
r76562 r76678 288 288 BOOL mSpecCtrl; 289 289 BOOL mSpecCtrlByHost; 290 BOOL mL1DFlushOnSched; 291 BOOL mL1DFlushOnVMEntry; 290 292 BOOL mNestedHWVirt; 291 293 ULONG mCPUCount; -
trunk/src/VBox/Main/src-client/ConsoleImpl2.cpp
r76553 r76678 1183 1183 hrc = pMachine->GetCPUProperty(CPUPropertyType_SpecCtrlByHost, &fSpecCtrlByHost); H(); 1184 1184 InsertConfigInteger(pHM, "SpecCtrlByHost", fSpecCtrlByHost); 1185 1186 BOOL fL1DFlushOnSched = true; 1187 hrc = pMachine->GetCPUProperty(CPUPropertyType_L1DFlushOnEMTScheduling, &fL1DFlushOnSched); H(); 1188 InsertConfigInteger(pHM, "L1DFlushOnSched", fL1DFlushOnSched); 1189 1190 BOOL fL1DFlushOnVMEntry = false; 1191 hrc = pMachine->GetCPUProperty(CPUPropertyType_L1DFlushOnVMEntry, &fL1DFlushOnVMEntry); H(); 1192 InsertConfigInteger(pHM, "L1DFlushOnVMEntry", fL1DFlushOnVMEntry); 1185 1193 1186 1194 /* Reset overwrite. */ -
trunk/src/VBox/Main/src-server/MachineImpl.cpp
r76592 r76678 196 196 mSpecCtrl = false; 197 197 mSpecCtrlByHost = false; 198 mL1DFlushOnSched = true; 199 mL1DFlushOnVMEntry = false; 198 200 mNestedHWVirt = false; 199 201 mHPETEnabled = false; … … 2025 2027 break; 2026 2028 2029 case CPUPropertyType_L1DFlushOnEMTScheduling: 2030 *aValue = mHWData->mL1DFlushOnSched; 2031 break; 2032 2033 case CPUPropertyType_L1DFlushOnVMEntry: 2034 *aValue = mHWData->mL1DFlushOnVMEntry; 2035 break; 2036 2027 2037 default: 2028 2038 return E_INVALIDARG; … … 2102 2112 mHWData.backup(); 2103 2113 mHWData->mNestedHWVirt = !!aValue; 2114 break; 2115 2116 case CPUPropertyType_L1DFlushOnEMTScheduling: 2117 i_setModified(IsModified_MachineData); 2118 mHWData.backup(); 2119 mHWData->mL1DFlushOnSched = !!aValue; 2120 break; 2121 2122 case CPUPropertyType_L1DFlushOnVMEntry: 2123 i_setModified(IsModified_MachineData); 2124 mHWData.backup(); 2125 mHWData->mL1DFlushOnVMEntry = !!aValue; 2104 2126 break; 2105 2127 … … 8836 8858 mHWData->mSpecCtrl = data.fSpecCtrl; 8837 8859 mHWData->mSpecCtrlByHost = data.fSpecCtrlByHost; 8860 mHWData->mL1DFlushOnSched = data.fL1DFlushOnSched; 8861 mHWData->mL1DFlushOnVMEntry = data.fL1DFlushOnVMEntry; 8838 8862 mHWData->mNestedHWVirt = data.fNestedHWVirt; 8839 8863 mHWData->mCPUCount = data.cCPUs; … … 10157 10181 data.fSpecCtrl = !!mHWData->mSpecCtrl; 10158 10182 data.fSpecCtrlByHost = !!mHWData->mSpecCtrlByHost; 10183 data.fL1DFlushOnSched = !!mHWData->mL1DFlushOnSched; 10184 data.fL1DFlushOnVMEntry = !!mHWData->mL1DFlushOnVMEntry; 10159 10185 data.fNestedHWVirt = !!mHWData->mNestedHWVirt; 10160 10186 data.cCPUs = mHWData->mCPUCount; -
trunk/src/VBox/Main/xml/Settings.cpp
r76598 r76678 3068 3068 fSpecCtrl(false), 3069 3069 fSpecCtrlByHost(false), 3070 fL1DFlushOnSched(true), 3071 fL1DFlushOnVMEntry(false), 3070 3072 fNestedHWVirt(false), 3071 3073 enmLongMode(HC_ARCH_BITS == 64 ? Hardware::LongMode_Enabled : Hardware::LongMode_Disabled), … … 3201 3203 && fSpecCtrl == h.fSpecCtrl 3202 3204 && fSpecCtrlByHost == h.fSpecCtrlByHost 3205 && fL1DFlushOnSched == h.fL1DFlushOnSched 3206 && fL1DFlushOnVMEntry == h.fL1DFlushOnVMEntry 3203 3207 && fNestedHWVirt == h.fNestedHWVirt 3204 3208 && cCPUs == h.cCPUs … … 4221 4225 if (pelmCPUChild) 4222 4226 pelmCPUChild->getAttributeValue("enabled", hw.fSpecCtrlByHost); 4227 pelmCPUChild = pelmHwChild->findChildElement("L1DFlushOn"); 4228 if (pelmCPUChild) 4229 { 4230 pelmCPUChild->getAttributeValue("scheduling", hw.fL1DFlushOnSched); 4231 pelmCPUChild->getAttributeValue("vmentry", hw.fL1DFlushOnVMEntry); 4232 } 4223 4233 pelmCPUChild = pelmHwChild->findChildElement("NestedHWVirt"); 4224 4234 if (pelmCPUChild) … … 5581 5591 pelmChild->setAttribute("vmentry", hw.fIBPBOnVMEntry); 5582 5592 } 5583 } 5584 if (m->sv >= SettingsVersion_v1_16 && hw.fSpecCtrl) 5585 pelmCPU->createChild("SpecCtrl")->setAttribute("enabled", hw.fSpecCtrl); 5586 if (m->sv >= SettingsVersion_v1_16 && hw.fSpecCtrlByHost) 5587 pelmCPU->createChild("SpecCtrlByHost")->setAttribute("enabled", hw.fSpecCtrlByHost); 5593 if (hw.fSpecCtrl) 5594 pelmCPU->createChild("SpecCtrl")->setAttribute("enabled", hw.fSpecCtrl); 5595 if (hw.fSpecCtrlByHost) 5596 pelmCPU->createChild("SpecCtrlByHost")->setAttribute("enabled", hw.fSpecCtrlByHost); 5597 if (!hw.fL1DFlushOnSched || hw.fL1DFlushOnVMEntry) 5598 { 5599 xml::ElementNode *pelmChild = pelmCPU->createChild("L1DFlushOn"); 5600 if (!hw.fL1DFlushOnSched) 5601 pelmChild->setAttribute("scheduling", hw.fL1DFlushOnSched); 5602 if (hw.fL1DFlushOnVMEntry) 5603 pelmChild->setAttribute("vmentry", hw.fL1DFlushOnVMEntry); 5604 } 5605 } 5588 5606 if (m->sv >= SettingsVersion_v1_17 && hw.fNestedHWVirt) 5589 
5607 pelmCPU->createChild("NestedHWVirt")->setAttribute("enabled", hw.fNestedHWVirt); … … 7346 7364 || hardwareMachine.fIBPBOnVMEntry 7347 7365 || hardwareMachine.fSpecCtrl 7348 || hardwareMachine.fSpecCtrlByHost) 7366 || hardwareMachine.fSpecCtrlByHost 7367 || !hardwareMachine.fL1DFlushOnSched 7368 || hardwareMachine.fL1DFlushOnVMEntry) 7349 7369 { 7350 7370 m->sv = SettingsVersion_v1_16; -
trunk/src/VBox/VMM/VMMAll/CPUMAllMsrs.cpp
r76553 r76678 1521 1521 1522 1522 1523 1524 1525 1526 1527 1528 1529 1523 /** @callback_method_impl{FNCPUMWRMSR} */ 1524 static DECLCALLBACK(VBOXSTRICTRC) cpumMsrWr_Ia32FlushCmd(PVMCPU pVCpu, uint32_t idMsr, PCCPUMMSRRANGE pRange, uint64_t uValue, uint64_t uRawValue) 1525 { 1526 RT_NOREF_PV(pVCpu); RT_NOREF_PV(idMsr); RT_NOREF_PV(pRange); RT_NOREF_PV(uRawValue); 1527 if ((uValue & ~MSR_IA32_FLUSH_CMD_F_L1D) == 0) 1528 return VINF_SUCCESS; 1529 Log(("CPUM: Invalid MSR_IA32_FLUSH_CMD_ bits (trying to write %#llx)\n", uValue)); 1530 return VERR_CPUM_RAISE_GP_0; 1531 } 1530 1532 1531 1533 … … 5337 5339 cpumMsrWr_Ia32SpecCtrl, 5338 5340 cpumMsrWr_Ia32PredCmd, 5341 cpumMsrWr_Ia32FlushCmd, 5339 5342 5340 5343 cpumMsrWr_Amd64Efer, … … 6043 6046 CPUM_ASSERT_WR_MSR_FN(Ia32SpecCtrl); 6044 6047 CPUM_ASSERT_WR_MSR_FN(Ia32PredCmd); 6048 CPUM_ASSERT_WR_MSR_FN(Ia32FlushCmd); 6045 6049 6046 6050 CPUM_ASSERT_WR_MSR_FN(Amd64Efer); -
trunk/src/VBox/VMM/VMMR0/CPUMR0.cpp
r76553 r76678 214 214 uint32_t u32CpuVersion; 215 215 uint32_t u32Dummy; 216 uint32_t fFeatures; 216 uint32_t fFeatures; /* (Used further down to check for MSRs, so don't clobber.) */ 217 217 ASMCpuId(1, &u32CpuVersion, &u32Dummy, &u32Dummy, &fFeatures); 218 218 uint32_t const u32Family = u32CpuVersion >> 8; … … 264 264 } 265 265 } 266 } 267 268 /* 269 * Copy MSR_IA32_ARCH_CAPABILITIES bits over into the host feature structure. 270 */ 271 pVM->cpum.s.HostFeatures.fArchRdclNo = 0; 272 pVM->cpum.s.HostFeatures.fArchIbrsAll = 0; 273 pVM->cpum.s.HostFeatures.fArchRsbOverride = 0; 274 pVM->cpum.s.HostFeatures.fArchVmmNeedNotFlushL1d = 0; 275 uint32_t const cStdRange = ASMCpuId_EAX(0); 276 if ( ASMIsValidStdRange(cStdRange) 277 && cStdRange >= 7) 278 { 279 uint32_t fEdxFeatures = ASMCpuId_EDX(7); 280 if ( (fEdxFeatures & X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP) 281 && (fFeatures & X86_CPUID_FEATURE_EDX_MSR)) 282 { 283 uint64_t const fArchVal = ASMRdMsr(MSR_IA32_ARCH_CAPABILITIES); 284 pVM->cpum.s.HostFeatures.fArchRdclNo = RT_BOOL(fArchVal & MSR_IA32_ARCH_CAP_F_RDCL_NO); 285 pVM->cpum.s.HostFeatures.fArchIbrsAll = RT_BOOL(fArchVal & MSR_IA32_ARCH_CAP_F_IBRS_ALL); 286 pVM->cpum.s.HostFeatures.fArchRsbOverride = RT_BOOL(fArchVal & MSR_IA32_ARCH_CAP_F_RSBO); 287 pVM->cpum.s.HostFeatures.fArchVmmNeedNotFlushL1d = RT_BOOL(fArchVal & MSR_IA32_ARCH_CAP_F_VMM_NEED_NOT_FLUSH_L1D); 288 } 289 else 290 pVM->cpum.s.HostFeatures.fArchCap = 0; 266 291 } 267 292 -
trunk/src/VBox/VMM/VMMR0/HMR0A.asm
r76553 r76678 252 252 wrmsr 253 253 %%no_indirect_branch_barrier: 254 %endmacro 255 256 ;; 257 ; Creates an indirect branch prediction and L1D barrier on CPUs that need and supports that. 258 ; @clobbers eax, edx, ecx 259 ; @param 1 How to address CPUMCTX. 260 ; @param 2 Which IBPB flag to test for (CPUMCTX_WSF_IBPB_ENTRY or CPUMCTX_WSF_IBPB_EXIT) 261 ; @param 3 Which FLUSH flag to test for (CPUMCTX_WSF_L1D_ENTRY) 262 %macro INDIRECT_BRANCH_PREDICTION_AND_L1_CACHE_BARRIER 3 263 ; Only one test+jmp when disabled CPUs. 264 test byte [%1 + CPUMCTX.fWorldSwitcher], (%2 | %3) 265 jz %%no_barrier_needed 266 267 ; The eax:edx value is the same for both. 268 AssertCompile(MSR_IA32_PRED_CMD_F_IBPB == MSR_IA32_FLUSH_CMD_F_L1D) 269 mov eax, MSR_IA32_PRED_CMD_F_IBPB 270 xor edx, edx 271 272 ; Indirect branch barrier. 273 test byte [%1 + CPUMCTX.fWorldSwitcher], %2 274 jz %%no_indirect_branch_barrier 275 mov ecx, MSR_IA32_PRED_CMD 276 wrmsr 277 %%no_indirect_branch_barrier: 278 279 ; Level 1 data cache flush. 280 test byte [%1 + CPUMCTX.fWorldSwitcher], %3 281 jz %%no_cache_flush_barrier 282 mov ecx, MSR_IA32_FLUSH_CMD 283 wrmsr 284 %%no_cache_flush_barrier: 285 286 %%no_barrier_needed: 254 287 %endmacro 255 288 … … 1454 1487 ; Don't mess with ESP anymore!!! 1455 1488 1456 ; Fight spectre .1457 INDIRECT_BRANCH_PREDICTION_ BARRIER xSI, CPUMCTX_WSF_IBPB_ENTRY1489 ; Fight spectre and similar. 1490 INDIRECT_BRANCH_PREDICTION_AND_L1_CACHE_BARRIER xSI, CPUMCTX_WSF_IBPB_ENTRY, CPUMCTX_WSF_L1D_ENTRY 1458 1491 1459 1492 ; Load guest general purpose registers. … … 1763 1796 ; Don't mess with ESP anymore!!! 1764 1797 1765 ; Fight spectre .1766 INDIRECT_BRANCH_PREDICTION_ BARRIER xSI, CPUMCTX_WSF_IBPB_ENTRY1798 ; Fight spectre and similar. 1799 INDIRECT_BRANCH_PREDICTION_AND_L1_CACHE_BARRIER xSI, CPUMCTX_WSF_IBPB_ENTRY, CPUMCTX_WSF_L1D_ENTRY 1767 1800 1768 1801 ; Load guest general purpose registers. -
trunk/src/VBox/VMM/VMMR0/HMVMXR0.cpp
r76637 r76678 2529 2529 #endif 2530 2530 /* 2531 * The IA32_PRED_CMD MSR is write-only and has no state associated with it. We never need to intercept 2532 * access (writes need to be executed without exiting, reds will #GP-fault anyway). 2531 * The IA32_PRED_CMD and IA32_FLUSH_CMD MSRs are write-only and have no state 2532 * associated with them. We never need to intercept access (writes need to 2533 * be executed without exiting, reads will #GP-fault anyway). 2533 2534 */ 2534 2535 if (pVM->cpum.ro.GuestFeatures.fIbpb) 2535 2536 hmR0VmxSetMsrPermission(pVCpu, MSR_IA32_PRED_CMD, VMXMSREXIT_PASSTHRU_READ, VMXMSREXIT_PASSTHRU_WRITE); 2537 if (pVM->cpum.ro.GuestFeatures.fFlushCmd) 2538 hmR0VmxSetMsrPermission(pVCpu, MSR_IA32_FLUSH_CMD, VMXMSREXIT_PASSTHRU_READ, VMXMSREXIT_PASSTHRU_WRITE); 2536 2539 2537 2540 /* Though MSR_IA32_PERF_GLOBAL_CTRL is saved/restored lazily, we want to intercept reads/writes to it for now. */ … … 8057 8060 pVCpu->hm.s.fLeaveDone = false; 8058 8061 Log4Func(("Activated Vmcs. HostCpuId=%u\n", RTMpCpuId())); 8062 8063 /* 8064 * Do the EMT scheduled L1D flush here if needed. 8065 */ 8066 if (pVCpu->CTX_SUFF(pVM)->hm.s.fL1dFlushOnSched) 8067 ASMWrMsr(MSR_IA32_FLUSH_CMD, MSR_IA32_FLUSH_CMD_F_L1D); 8059 8068 } 8060 8069 return rc; … … 8135 8144 } 8136 8145 pVCpu->hm.s.fLeaveDone = false; 8146 8147 /* Do the EMT scheduled L1D flush if needed. */ 8148 if (pVCpu->CTX_SUFF(pVM)->hm.s.fL1dFlushOnSched) 8149 ASMWrMsr(MSR_IA32_FLUSH_CMD, MSR_IA32_FLUSH_CMD_F_L1D); 8137 8150 8138 8151 /* Restore longjmp state. */ -
trunk/src/VBox/VMM/VMMR3/CPUMR3CpuId.cpp
r76553 r76678 1870 1870 pFeatures->fIbrs = pFeatures->fIbpb; 1871 1871 pFeatures->fStibp = RT_BOOL(pSxfLeaf0->uEdx & X86_CPUID_STEXT_FEATURE_EDX_STIBP); 1872 #if 0 // Disabled until IA32_ARCH_CAPABILITIES support can be tested 1872 pFeatures->fFlushCmd = RT_BOOL(pSxfLeaf0->uEdx & X86_CPUID_STEXT_FEATURE_EDX_FLUSH_CMD); 1873 1873 pFeatures->fArchCap = RT_BOOL(pSxfLeaf0->uEdx & X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP); 1874 #endif1875 1874 } 1876 1875 … … 1878 1877 PCCPUMCPUIDLEAF const pMWaitLeaf = cpumR3CpuIdFindLeaf(paLeaves, cLeaves, 5); 1879 1878 if (pMWaitLeaf) 1880 {1881 1879 pFeatures->fMWaitExtensions = (pMWaitLeaf->uEcx & (X86_CPUID_MWAIT_ECX_EXT | X86_CPUID_MWAIT_ECX_BREAKIRQIF0)) 1882 == (X86_CPUID_MWAIT_ECX_EXT | X86_CPUID_MWAIT_ECX_BREAKIRQIF0); 1883 } 1880 == (X86_CPUID_MWAIT_ECX_EXT | X86_CPUID_MWAIT_ECX_BREAKIRQIF0); 1884 1881 1885 1882 /* Extended features. */ … … 2473 2470 CPUMISAEXTCFG enmPcid; 2474 2471 CPUMISAEXTCFG enmInvpcid; 2472 CPUMISAEXTCFG enmFlushCmdMsr; 2475 2473 2476 2474 CPUMISAEXTCFG enmAbm; … … 3274 3272 //| X86_CPUID_STEXT_FEATURE_EDX_IBRS_IBPB RT_BIT(26) 3275 3273 //| X86_CPUID_STEXT_FEATURE_EDX_STIBP RT_BIT(27) 3274 | (pConfig->enmFlushCmdMsr ? 
X86_CPUID_STEXT_FEATURE_EDX_FLUSH_CMD : 0) 3276 3275 //| X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP RT_BIT(29) 3277 3276 ; … … 3302 3301 PORTABLE_DISABLE_FEATURE_BIT( 1, pCurLeaf->uEbx, SHA, X86_CPUID_STEXT_FEATURE_EBX_SHA); 3303 3302 PORTABLE_DISABLE_FEATURE_BIT( 1, pCurLeaf->uEcx, PREFETCHWT1, X86_CPUID_STEXT_FEATURE_ECX_PREFETCHWT1); 3303 PORTABLE_DISABLE_FEATURE_BIT_CFG(3, pCurLeaf->uEdx, FLUSH_CMD, X86_CPUID_STEXT_FEATURE_EDX_FLUSH_CMD, pConfig->enmFlushCmdMsr); 3304 3304 } 3305 3305 … … 3315 3315 if (pConfig->enmInvpcid == CPUMISAEXTCFG_ENABLED_ALWAYS) 3316 3316 pCurLeaf->uEbx |= X86_CPUID_STEXT_FEATURE_EBX_INVPCID; 3317 if (pConfig->enmFlushCmdMsr == CPUMISAEXTCFG_ENABLED_ALWAYS) 3318 pCurLeaf->uEdx |= X86_CPUID_STEXT_FEATURE_EDX_FLUSH_CMD; 3317 3319 break; 3318 3320 } … … 4122 4124 "|PCID" 4123 4125 "|INVPCID" 4126 "|FlushCmdMsr" 4124 4127 "|ABM" 4125 4128 "|SSE4A" … … 4277 4280 AssertLogRelRCReturn(rc, rc); 4278 4281 4282 /** @cfgm{/CPUM/IsaExts/FlushCmdMsr, isaextcfg, true} 4283 * Whether to expose the IA32_FLUSH_CMD MSR to the guest. 4284 */ 4285 rc = cpumR3CpuIdReadIsaExtCfg(pVM, pIsaExts, "FlushCmdMsr", &pConfig->enmFlushCmdMsr, CPUMISAEXTCFG_ENABLED_SUPPORTED); 4286 AssertLogRelRCReturn(rc, rc); 4287 4279 4288 4280 4289 /* AMD: */ … … 4419 4428 } 4420 4429 4430 /* 4431 * Setup MSRs introduced in microcode updates or that are otherwise not in 4432 * the CPU profile, but are advertised in the CPUID info we just sanitized. 4433 */ 4434 if (RT_SUCCESS(rc)) 4435 rc = cpumR3MsrReconcileWithCpuId(pVM); 4421 4436 /* 4422 4437 * MSR fudging. … … 4831 4846 if (!pMsrRange) 4832 4847 { 4848 /** @todo incorrect fWrGpMask. 
*/ 4833 4849 static CPUMMSRRANGE const s_SpecCtrl = 4834 4850 { … … 4844 4860 } 4845 4861 4846 if (pVM->cpum.s.HostFeatures.fArchCap) { 4862 if (pVM->cpum.s.HostFeatures.fArchCap) 4863 { 4847 4864 pLeaf->uEdx |= X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP; 4848 4865 … … 5025 5042 pLeaf = cpumR3CpuIdGetExactLeaf(&pVM->cpum.s, UINT32_C(0x00000007), 0); 5026 5043 if (pLeaf) 5027 /*pVM->cpum.s.aGuestCpuIdPatmStd[7].uEdx =*/ pLeaf->uEdx &= ~(X86_CPUID_STEXT_FEATURE_EDX_IBRS_IBPB | X86_CPUID_STEXT_FEATURE_EDX_STIBP | X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP); 5044 pLeaf->uEdx &= ~( X86_CPUID_STEXT_FEATURE_EDX_IBRS_IBPB | X86_CPUID_STEXT_FEATURE_EDX_STIBP 5045 | X86_CPUID_STEXT_FEATURE_EDX_ARCHCAP); 5028 5046 pVM->cpum.s.GuestFeatures.fSpeculationControl = 0; 5029 5047 Log(("CPUM: ClearGuestCpuIdFeature: Disabled speculation control!\n")); … … 6342 6360 DBGFREGSUBFIELD_RO("IBRS_IBPB\0" "IA32_SPEC_CTRL.IBRS and IA32_PRED_CMD.IBPB", 26, 1, 0), 6343 6361 DBGFREGSUBFIELD_RO("STIBP\0" "Supports IA32_SPEC_CTRL.STIBP", 27, 1, 0), 6362 DBGFREGSUBFIELD_RO("FLUSH_CMD\0" "Supports IA32_FLUSH_CMD", 28, 1, 0), 6344 6363 DBGFREGSUBFIELD_RO("ARCHCAP\0" "Supports IA32_ARCH_CAP", 29, 1, 0), 6345 6364 DBGFREGSUBFIELD_TERMINATOR() -
trunk/src/VBox/VMM/VMMR3/CPUMR3Db.cpp
r76561 r76678 595 595 596 596 /** 597 * Reconciles CPUID info with MSRs (selected ones). 598 * 599 * @returns VBox status code. 600 * @param pVM The cross context VM structure. 601 */ 602 int cpumR3MsrReconcileWithCpuId(PVM pVM) 603 { 604 PCCPUMMSRRANGE papToAdd[10]; 605 uint32_t cToAdd = 0; 606 607 /* 608 * The IA32_FLUSH_CMD MSR was introduced in MCUs for CVE-2018-3646 and associated issues. 609 */ 610 if (pVM->cpum.s.GuestFeatures.fFlushCmd && !cpumLookupMsrRange(pVM, MSR_IA32_FLUSH_CMD)) 611 { 612 static CPUMMSRRANGE const s_FlushCmd = 613 { 614 /*.uFirst =*/ MSR_IA32_FLUSH_CMD, 615 /*.uLast =*/ MSR_IA32_FLUSH_CMD, 616 /*.enmRdFn =*/ kCpumMsrRdFn_WriteOnly, 617 /*.enmWrFn =*/ kCpumMsrWrFn_Ia32FlushCmd, 618 /*.offCpumCpu =*/ UINT16_MAX, 619 /*.fReserved =*/ 0, 620 /*.uValue =*/ 0, 621 /*.fWrIgnMask =*/ 0, 622 /*.fWrGpMask =*/ ~MSR_IA32_FLUSH_CMD_F_L1D, 623 /*.szName = */ "IA32_FLUSH_CMD" 624 }; 625 papToAdd[cToAdd++] = &s_FlushCmd; 626 } 627 628 /* 629 * Do the adding. 630 */ 631 for (uint32_t i = 0; i < cToAdd; i++) 632 { 633 PCCPUMMSRRANGE pRange = papToAdd[i]; 634 LogRel(("CPUM: MSR/CPUID reconciliation insert: %#010x %s\n", pRange->uFirst, pRange->szName)); 635 int rc = cpumR3MsrRangesInsert(NULL /* pVM */, &pVM->cpum.s.GuestInfo.paMsrRangesR3, &pVM->cpum.s.GuestInfo.cMsrRanges, 636 pRange); 637 if (RT_FAILURE(rc)) 638 return rc; 639 } 640 return VINF_SUCCESS; 641 } 642 643 644 /** 597 645 * Worker for cpumR3MsrApplyFudge that applies one table. 598 646 * -
trunk/src/VBox/VMM/VMMR3/HM.cpp
r76553 r76678 485 485 "|IBPBOnVMEntry" 486 486 "|SpecCtrlByHost" 487 "|L1DFlushOnSched" 488 "|L1DFlushOnVMEntry" 487 489 "|TPRPatchingEnabled" 488 490 "|64bitEnabled" … … 675 677 rc = CFGMR3QueryBoolDef(pCfgHm, "IBPBOnVMEntry", &pVM->hm.s.fIbpbOnVmEntry, false); 676 678 AssertLogRelRCReturn(rc, rc); 679 680 /** @cfgm{/HM/L1DFlushOnSched, bool, true} 681 * CVE-2018-3646 workaround, ignored on CPUs that aren't affected. */ 682 rc = CFGMR3QueryBoolDef(pCfgHm, "L1DFlushOnSched", &pVM->hm.s.fL1dFlushOnSched, true); 683 AssertLogRelRCReturn(rc, rc); 684 685 /** @cfgm{/HM/L1DFlushOnVMEntry, bool} 686 * CVE-2018-3646 workaround, ignored on CPUs that aren't affected. */ 687 rc = CFGMR3QueryBoolDef(pCfgHm, "L1DFlushOnVMEntry", &pVM->hm.s.fL1dFlushOnVmEntry, false); 688 AssertLogRelRCReturn(rc, rc); 689 690 /* Disable L1DFlushOnSched if L1DFlushOnVMEntry is enabled. */ 691 if (pVM->hm.s.fL1dFlushOnVmEntry) 692 pVM->hm.s.fL1dFlushOnSched = false; 677 693 678 694 /** @cfgm{/HM/SpecCtrlByHost, bool} … … 1293 1309 1294 1310 /* 1311 * Check if L1D flush is needed/possible. 1312 */ 1313 if ( !pVM->cpum.ro.HostFeatures.fFlushCmd 1314 || pVM->cpum.ro.HostFeatures.enmMicroarch < kCpumMicroarch_Intel_Core7_Nehalem 1315 || pVM->cpum.ro.HostFeatures.enmMicroarch >= kCpumMicroarch_Intel_Core7_End 1316 || pVM->cpum.ro.HostFeatures.fArchVmmNeedNotFlushL1d 1317 || pVM->cpum.ro.HostFeatures.fArchRdclNo) 1318 pVM->hm.s.fL1dFlushOnSched = pVM->hm.s.fL1dFlushOnVmEntry = false; 1319 1320 /* 1295 1321 * Sync options. 
1296 1322 */ … … 1309 1335 pCpuCtx->fWorldSwitcher |= CPUMCTX_WSF_IBPB_ENTRY; 1310 1336 } 1337 if (pVM->cpum.ro.HostFeatures.fFlushCmd && pVM->hm.s.fL1dFlushOnVmEntry) 1338 pCpuCtx->fWorldSwitcher |= CPUMCTX_WSF_L1D_ENTRY; 1311 1339 if (iCpu == 0) 1312 LogRel(("HM: fWorldSwitcher=%#x (fIbpbOnVmExit=%RTbool fIbpbOnVmEntry=%RTbool)\n", 1313 pCpuCtx->fWorldSwitcher, pVM->hm.s.fIbpbOnVmExit, pVM->hm.s.fIbpbOnVmEntry)); 1340 LogRel(("HM: fWorldSwitcher=%#x (fIbpbOnVmExit=%RTbool fIbpbOnVmEntry=%RTbool fL1dFlushOnVmEntry=%RTbool); fL1dFlushOnSched=%RTbool\n", 1341 pCpuCtx->fWorldSwitcher, pVM->hm.s.fIbpbOnVmExit, pVM->hm.s.fIbpbOnVmEntry, pVM->hm.s.fL1dFlushOnVmEntry, 1342 pVM->hm.s.fL1dFlushOnSched)); 1314 1343 } 1315 1344 -
trunk/src/VBox/VMM/include/CPUMInternal.h
r76585 r76678 543 543 int cpumR3DbGetCpuInfo(const char *pszName, PCPUMINFO pInfo); 544 544 int cpumR3MsrRangesInsert(PVM pVM, PCPUMMSRRANGE *ppaMsrRanges, uint32_t *pcMsrRanges, PCCPUMMSRRANGE pNewRange); 545 int cpumR3MsrReconcileWithCpuId(PVM pVM); 545 546 int cpumR3MsrApplyFudge(PVM pVM); 546 547 int cpumR3MsrRegStats(PVM pVM); -
trunk/src/VBox/VMM/include/HMInternal.h
r76585 r76678 437 437 /** Set if indirect branch prediction barrier on VM entry. */ 438 438 bool fIbpbOnVmEntry; 439 /** Set if level 1 data cache should be flushed on VM entry. */ 440 bool fL1dFlushOnVmEntry; 441 /** Set if level 1 data cache should be flushed on EMT scheduling. */ 442 bool fL1dFlushOnSched; 439 443 /** Set if host manages speculation control settings. */ 440 444 bool fSpecCtrlByHost; 441 /** Explicit padding. */442 bool afPadding[2];443 445 444 446 /** Maximum ASID allowed. */