High Availability (Clustering) forum

Cluster shared volume disappears... STATUS_MEDIA_WRITE_PROTECTED(c00000a2)


Hi all, I am having an issue I hope someone can help me with. I have recently inherited a 2-node cluster; each node is one half of an ASUS RS702D-E6/PS8, so both nodes should be near identical. They are both running Hyper-V Server 2008 R2, hosting some 14 VMs.

Each node is hooked up via Cat5e to a Promise VessRAID 1830i via iSCSI, using one of the server's onboard NICs. That cluster network is set to Disabled for cluster use (the way I think it is supposed to be, not the way I originally inherited it), on its own class A subnet and its own private physical switch...

The SAN hosts a 30 GB witness disk and two 2 TB CSV volumes, one for each node, labeled Volume1 and Volume2, with some VHDs on each.

The cluster's clients connect to the rest of the company via the virtual external NIC adapters created in Hyper-V Manager, which physically sit on Intel ET dual-port gigabit adapters wired into our main core switch, set up with class C subnets.

I also have a crossover cable running between the other ports of the Intel ET dual-port NICs, using yet a third class B subnet; it is configured in Failover Cluster Manager as internal, so there are three IPv4 cluster networks in total.

Even though the cluster passes the validation tests with flying colors, I am not convinced all is well. With Hyperv1 (node 1), I can move the CSVs and machines over to Hyperv2 (node 2), stop the cluster service on node 1, and perform maintenance such as a reboot or installing patches. When node 1 reboots, or I restart its cluster service to bring it back online, it is well behaved, leaving Hyperv2 the owner of all three CSVs: Witness, Volume1 and Volume2. I can then pass them back or split them up any which way, and at no point is cluster service interrupted or noticed by users. Duh, I know this is how it is SUPPOSED to work, but...

If I try the same thing with node 2, that is, move the witness and volumes to node 1 as owner, migrate all VMs over, stop the cluster service on node 2, do whatever I have to do, and reboot, then as soon as node 2 tries to come back online it tries to snatch Volume2 back. It never succeeds, and the following error is logged in the cluster event log:

Hyperv1

Event ID: 5120

Source: Microsoft-Windows-FailoverClustering

Task Category: Cluster Shared Volume

The listed message is: Cluster Shared Volume 'Volume2' ('HyperV1 Disk') is no longer available on this node because of 'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the volume is reestablished.

Followed 4 seconds later by:

Hyperv1

event ID: 1069

Source: Microsoft-Windows-FailoverClustering

Task Category: Resource Control Manager

Message: Cluster resource 'Hyperv1 Disk' in clustered service or application '75d88aa3-8ecf-47c7-98e7-6099e56a097d' failed.

- AND -

2 of the following:

Hyperv1

event ID: 1038

Source: Microsoft-Windows-FailoverClustering

Task Category: Physical Disk Resource

Message: Ownership of cluster disk 'HyperV1 Disk' has been unexpectedly lost by this node. Run the Validate a Configuration wizard to check your storage configuration.

Followed 1 second later by another 1069, and then various "machine failed" messages.

If you browse to \\hyperv-1\c$\ClusterStorage\ or \\hyperv-2\c$\ClusterStorage\, Volume2 is indeed missing!!

This has caused me to panic a few times; the first time I saw it I thought everything was lost. But I can get it back by stopping the cluster service on node 1 (or shutting it down), restarting node 2 (or its cluster service), and waiting forever for the disk to list as failed; shortly thereafter it comes back online. I can then boot node 1 back up and let it start servicing the cluster again. It doesn't pull the same craziness node 2 does when it comes online; it leaves all ownership with node 2 unless I tell it to move.

I am very new to clusters, and all I know at this point is that this is pretty cool stuff. Basically, "if it is running, don't mess with it" is the attitude I have taken, but there is a significant amount of money tied up in this hardware and we should be able to leverage it as needed, not wonder if it is going to act up again.

To me it seems for a ‘failover’ cluster it should be way more robust than this...

I can go into way more detail if needed, but I didn't see any other posts on this specific issue in any forum I scoured. I'm obviously looking for advice on how to get this resolved, as well as on whether I wired the cluster networks correctly. I am also not sure anymore which protocols are bound to which NICs, or what the binding order should be; could this be what is causing my issue?
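For the network side, a quick sanity check is to list each cluster network's role from PowerShell. A minimal sketch, assuming the FailoverClusters module is available on the 2008 R2 nodes:

    Import-Module FailoverClusters
    # Role 0 = disabled for cluster use (the iSCSI network), 1 = cluster only
    # (internal/heartbeat), 3 = cluster and client (the client-facing network)
    Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask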

I have NVSPBIND and NVSPSCRUB on both boxes if needed.

Thanks!

-LW


Hyper-V Cluster issues after applying Win2008 R2 SP1 on a 3 node Cluster!


Hello,

After applying Win2008 R2 SP1 and running "Validate this Cluster", I get these issues in the report:

"List Potential Cluster Disks"

Disk with identifier bd5a41af has a Persistent Reservation on it. The disk might be part of some other cluster. Removing the disk from validation set

Disk with identifier 2eff8c0d has a Persistent Reservation on it. The disk might be part of some other cluster. Removing the disk from validation set

Disk with identifier bd5a41ad has a Persistent Reservation on it. The disk might be part of some other cluster. Removing the disk from validation set

Disk with identifier c5643d96 has a Persistent Reservation on it. The disk might be part of some other cluster. Removing the disk from validation set

 

After checking the disk details, do I dare run this command to get rid of these without causing issues in the running environment?

"cluster node node1 /clearpr:5"

Disks eligible for cluster validation

Disk (referred to by this number in subsequent tests)   Nodes where the disk is visible   Number of nodes
Cluster disk 0 (disk identifier bd5a41ac)               All Nodes                         3

 



 "Validate SCSI device Vital Product Data (VPD)"

Failed to get SCSI page 83h VPD descriptors for cluster disk 1 from node NOD1.company.local status 2

Failed to get SCSI page 83h VPD descriptors for cluster disk 1 from node NOD1.company.local status 2


How do I get rid of the above warning?
I know the SAN storage device (Promise VessRAID 1820i) does support SCSI-3 persistent reservations, so...
Also, before I upgraded the nodes to Win2008 R2 SP1 this issue did not occur in the validation test.

Please advise.

thx /Tony

Hyper-V Live Migration not completing when using a VM with large RAM


hi,

I have a two-node Server 2012 R2 Hyper-V cluster which uses a 100 GB CSV and has 128 GB RAM across 2 physical CPUs (approx 7.1 GB used when the VM is not booted), and 1 VM running Windows 7 with 64 GB RAM assigned. The VHD size is around 21 GB and the BIN file is 64 GB (by the way, do we have to have that? Can we get rid of the BIN file?).

NUMA is enabled on both servers. When I attempt to live migrate I get event 1155 in the cluster events; the LM starts and gets to 60-something percent but then fails. The event details are: "The pending move for the role 'New Virtual Machine' did not complete."

However, when I lower the amount of RAM assigned to the VM to around 56 GB (56 + 7 = 63 GB) the LM works, and any amount of RAM below this allows LM to succeed. But it seems that if the total RAM in use on the physical server (including that used by the VMs) is 64 GB or above, the LM fails... coincidence, since the server has 64 GB per CPU?
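On the BIN file question: the .bin file reserves disk space equal to the VM's RAM for a saved state, and it goes away if the automatic stop action is anything other than Save. A hedged sketch ('New Virtual Machine' is the role name from the event above):

    # Remove the 64 GB .bin reservation by not saving state on host shutdown
    Set-VM -Name 'New Virtual Machine' -AutomaticStopAction ShutDown
    # Also worth checking: a 64 GB VM plus the parent partition cannot fit in a
    # single 64 GB NUMA node, so NUMA spanning must be allowed on both hosts
    Get-VMHost | Select-Object NumaSpanningEnabled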

why would this be?

many thanks

Steve

Hyper-V 2012 does not scale and is not stable enough for production use. WHO has 200+ VMs with stability? Event IDs 1146, 1230, 5120


For years now, we have had event ID 1146 crash nodes in the cluster (RHS process crashes). We have had several paid Microsoft cases open, even one with Premier. In fact we have one open currently with zero progress in 72 hours (115012612321318).

Is anyone really running 200+ machines out there on Hyper-V with any level of stability in production, or do you have a complete host (event ID 1146) or volume (event ID 5120) outage every month or so?

We have applied recommended hotfixes, and gone through the configuration many many times.

My only conclusion is that Hyper-V does not scale. Once we started adding a lot of machines and hosts, we started getting event 5120 (with STATUS_IO_TIMEOUT), which is unacceptable. It causes a huge slowdown or makes an entire volume inaccessible, and impacts EVERY machine on the volume; the other volumes keep working when this happens. In fact, we have a VMware cluster attached to the same SAN with the same host hardware, and it works flawlessly. Both use MPIO, so the timeout is caused by Hyper-V. The load was nearly identical on VMware and Hyper-V at one time; we had 100 machines on both and the same number of hosts. CPU load is tiny, memory is less than 50%, and IO uses 55 disk spindles for normal storage and another 55 for fast storage.

I'm more or less asking the community how to fix this, since support is not working, but I'm guessing there is no fix and this is really not production ready. I would really like to hear from ANYONE (non-sales) that is using 200+ machines without big outages.
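For anyone chasing the same 5120s, one place to start is the cluster debug log from every node around the time of the event; a minimal sketch:

    # Collect 15 minutes of cluster debug log from each node into C:\Temp;
    # look for the CSV redirect/MPIO path that stalled before STATUS_IO_TIMEOUT
    Get-ClusterLog -Destination C:\Temp -TimeSpan 15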


Clustered file server advertising via netbios?

I have recently upgraded/transitioned from Windows Server 2003 to Windows Server 2008 Enterprise. I have a 2-node cluster with file sharing. When my existing XP clients try to browse the network for the file server, they cannot see it. They do see other servers, including one of the nodes (not the one currently hosting file sharing) and a couple of virtual servers (one from each of the nodes). They can get to the share if they type the UNC with the share name (\\servername\ does not always work, and they have to type \\servername\share to reach the share they need).
What I would like is to control what the users see when they browse for resources. The browser service and WINS seem to be going away with 2008 technology.
My question is: what does Microsoft recommend for network resource advertising in a single domain with limited clients?
Should my users be using network places, or mapped drives, or ...?
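One low-tech option that sidesteps the browse list and WINS entirely is mapping drives against the clustered file server's network name, which registers in DNS. A sketch using the share path from the post:

    # Persistent mapped drive to the cluster's client access point;
    # relies on DNS name resolution, not on NetBIOS browsing
    net use S: \\servername\share /persistent:yes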
Joel

Just me and me alone keeps this ship afloat!

netft.sys is the cause of a bugcheck blue screen on a Windows 2008 R2 Datacenter server


Hi

We have a server getting rebooted by a bugcheck error in netft.sys. Please let me know if there is any fix for this issue; I am not sure what is causing it.

The server runs Windows 2008 R2 Datacenter and is part of a Hyper-V cluster.
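netft.sys is the cluster's virtual network adapter driver, and the bugcheck code it raised is recorded in the System log after each reboot; the stop code and parameters are what determine the actual fix. A minimal sketch for pulling the recorded codes:

    # Event 1001 (source BugCheck) holds the stop code and parameters
    # from the last crashes
    Get-WinEvent -FilterHashtable @{ LogName='System'; Id=1001 } |
        Select-Object TimeCreated, Message -First 5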

Thanks in advance

Network Drops for 30 seconds During Hyper-V Live Migration


I have 3 physical Hyper-V hosts set up with clustered storage. I disabled VMQ because I was getting errors when trying to do live migrations. I have also run the network portion of the cluster validation tests without errors. What happens is that when I do a live migration from any host to any other host, I lose network connectivity to every VM running on those hosts. During this time a SQL application that is running locks up and freezes all the users. Many have to use Task Manager to kill the application to get back in, or even reboot their machines to free it up.

I have been doing a ton of reading on network settings and configurations and have made no progress. Any help to point me in a direction to get this solved will be appreciated. I need to be able to do Live Migrations on my cluster storage.
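Two settings worth checking (assuming Server 2012 or later): how many simultaneous live migrations the hosts allow, and which performance option migration traffic uses. Throttling to one migration at a time can show whether migration traffic is starving the VM-facing NICs; a sketch:

    # Inspect, then throttle, live migration on each host
    Get-VMHost | Select-Object MaximumVirtualMachineMigrations, VirtualMachineMigrationPerformanceOption
    Set-VMHost -MaximumVirtualMachineMigrations 1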

Thanks for any help.


Windows 2008 SP2 cluster - failover or hang every night


Hi,

I am working on a Windows 2008 cluster file server, and a cluster node fails over or hangs every night between 9:30 PM and 6:30 AM.

I also contacted the hardware vendor, and after analysis they say it seems to be a cluster services issue. Can someone assist with this? Below are some event logs from the issue.

6/21/2016 9:31:00 PM      TCP/IP failed to establish an outgoing connection because the selected local endpoint was recently used to connect to the same remote endpoint. This error typically occurs when outgoing connections are opened and closed at a high rate, causing all available local ports to be used and forcing TCP/IP to reuse a local port for an outgoing connection. To minimize the risk of data corruption, the TCP/IP standard requires a minimum time period to elapse between successive connections from a given local endpoint to a given remote endpoint.

6/21/2016 9:31:00 PM      The description for Event ID 1069 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:

FileServer-(INNOXXX04)(Cluster Disk 7)

INNOXXX04

6/21/2016 9:34:57 PM      The description for Event ID 1230 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

FileServer-(INNOXXX04)(Cluster Disk 2)

clusres.dll

6/21/2016 9:34:57 PM      The description for Event ID 1230 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

FileServer-(INNOXXX04)(UserData1)

clusres.dll

6/21/2016 9:34:57 PM      The description for Event ID 1230 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

FileServer-(INNOXXX04)(Cluster Disk 1)

clusres.dll

6/21/2016 9:34:57 PM      The description for Event ID 1230 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

FileServer-(INNOXXX04)(Cluster Disk 8)

clusres.dll

6/21/2016 9:34:57 PM      The description for Event ID 1230 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

FileServer-(INNOXXX04)(Cluster Disk 5)

clusres.dll

6/21/2016 9:34:57 PM      The description for Event ID 1230 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

FileServer-(INNOXXX04)(Cluster Disk 3)

clusres.dll

6/21/2016 9:34:57 PM The description for Event ID 1146 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

INNOXXX01

6/21/2016 9:34:57 PM      The description for Event ID 1146 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

INNOXXX01

6/21/2016 9:34:57 PM      The description for Event ID 1146 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.If the event originated on another computer, the display information had to be saved with the event.The following information was included with the event:

INNOXXX01
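The very first event in the list points at ephemeral port exhaustion, which can starve the cluster service's own connections and would explain resource failures that start at the same time every night (a scheduled job, backup, or scan is the usual suspect). A first check, on the assumption the port-reuse event is related:

    # Show the dynamic (ephemeral) TCP port range on the node
    netsh int ipv4 show dynamicport tcp
    # Rough count of connections stuck in TIME_WAIT during the failure window
    (netstat -ano | Select-String 'TIME_WAIT').Count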




Cannot create checkpoint when shared vhdset (.vhds) is used by VM - 'not part of a checkpoint collection' error


We are trying to deploy a 'guest cluster' scenario over Hyper-V with a shared disk set over SOFS. By design, the .vhds format should fully support the backup feature.

All machines (Hyper-V, guest, SOFS) are installed with Windows Server 2016 Datacenter. Two Hyper-V virtual machines are configured to use a shared disk in .vhds format, located on an SOFS cluster formed of two nodes. The SOFS cluster has a share configured for applications, and Hyper-V uses the \\sofs_server\share_name\disk.vhds path to the SOFS remote storage. The guest machines are configured with the 'File Server' role and the 'Failover Clustering' feature to form a guest cluster. There are two disks configured on each guest cluster node: 1 - a private system disk in .vhdx format (OS), and 2 - the shared .vhds disk on SOFS.

While trying to take a checkpoint of a guest machine, I get the following error:

Cannot take checkpoint for 'guest-cluster-node0' because one or more sharable VHDX are attached and this is not part of a checkpoint collection.

Production checkpoints are enabled for the VM, and the 'Create standard checkpoint if it's not possible to create a production checkpoint' option is set. All integration services (including backup) are enabled for the VM.

When I remove the shared drive's .vhds disk from the VM's SCSI controller, checkpoints are created normally (for the private OS disk).

It is not clear what a 'checkpoint collection' is, or how to add a shared .vhds disk to this collection. Please advise.
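For what it's worth, checkpoints of VMs attached to shared VHD Sets are only supported as part of a checkpoint collection, which Windows Server 2016 exposes through the Hyper-V WMI v2 namespace; it is normally driven by backup software rather than created by hand. A couple of hedged sanity checks on the VM side ('guest-cluster-node0' is the VM name from the error):

    # Confirm checkpoint type and that the drive really is attached as shared
    Get-VM -Name 'guest-cluster-node0' | Select-Object Name, CheckpointType, Version
    Get-VMHardDiskDrive -VMName 'guest-cluster-node0' |
        Select-Object ControllerType, Path, SupportPersistentReservations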

Thanks.

2008R2 2-node cluster - how to configure for multiple instances of SQL 2008R2?


Hi All,

I am looking for some clarity, from the cluster perspective, on what is required to install multiple instances of SQL 2008R2 on the same Server 2008R2 cluster...

Do I create a new SQL resource group and present the new disks there? I know historically we have let the SQL install create the resource group - will the second SQL instance's install create a second resource group?

How about MSDTC?

I have been unable to find any step by step/best practices articles regarding the matter.
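From the cluster side, each SQL instance's setup creates (and names) its own resource group; what setup needs is for the new LUNs to already sit in Available Storage. A minimal sketch on 2008 R2:

    Import-Module FailoverClusters
    # Present any newly visible shared disks to the cluster's Available Storage
    Get-ClusterAvailableDisk | Add-ClusterDisk
    # After the second instance's SQL setup completes, its group shows up here
    Get-ClusterGroup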

Any help is greatly appreciated

Thanks in advance!!!

Can't remove Failover Cluster feature on Windows 2008 R2


Hello

When I remove the Failover Cluster feature, I get the following message:

Cannot remove Failover Clustering

This server is an active node in a failover cluster. Uninstalling the Failover Clustering feature on this node may impact the availability of clustered services and applications. It is recommended that you first evict the server from cluster membership. This can be done through the Failover Cluster Management snap-in by expanding the console tree under Nodes, selecting the node, clicking More Actions, and then clicking Evict.

I'm sure there is no cluster formed, so how can I remove it?
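If the node really is not a member of any cluster, clearing the stale cluster state on it usually unblocks the feature removal; a minimal sketch:

    Import-Module FailoverClusters
    # Wipe leftover cluster configuration from this node only
    Clear-ClusterNode -Force
    # Legacy equivalent: cluster node %computername% /forcecleanup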

 

Thanks !


S2D IO TIMEOUT when rebooting node


I am building a 6-node cluster with 12 6 TB drives, 2 4 TB Intel P4600 PCIe NVMe drives, Xeon Platinum 8168 / 768 GB RAM, and an LSI 9008 HBA.

The cluster passes all tests, the switches are properly configured, and the cluster works well, exceeding 1.1 million IOPS with VMFleet. However, at current patch level as of now (April 18, 2018) I am experiencing the following scenario:

When no storage job is running and all vdisks are listed as healthy, I can pause a node and drain it, and all is well, until the server is actually rebooted or taken offline. At that point a repair job is initiated and IO suffers badly, and can even stop altogether, causing vdisks to go into a paused state due to IO timeout (listed as the reason in cluster events).

Exacerbating this issue, when the paused node reboots and rejoins, it causes the repair job to suspend, stop, then restart (it seems; tracking this is hard, as all storage commands become unresponsive while the node is joining). At this point IO is guaranteed to stop on all vdisks at some point, for long enough to cause problems, including VM reboots.

The cluster was initially formed using VMM 2016. I have tried manually creating the vdisks, using single resiliency (3-way mirror) and multi-tier resiliency, with the same effect. This behavior was not observed when I did my POC testing last year. It is frankly a deal breaker and unusable: if I cannot reboot a single node without entirely stopping my workload, I cannot deploy. I'm hoping someone has some info. I'm going to re-install with Server 2016 RTM media, keep it unpatched, and see if the problem remains. However, it would be desirable to at least start the cluster at full patch. Any help appreciated. Thanks
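For comparison, a conservative drain-and-reboot sequence that waits for storage repair to finish before touching the next node; a sketch, where NODE1 stands in for the node being serviced:

    # Drain roles off the node, reboot it, bring it back, then wait until every
    # repair job has finished and all virtual disks report Healthy
    Suspend-ClusterNode -Name NODE1 -Drain
    Restart-Computer -ComputerName NODE1 -Wait -For PowerShell -Force
    Resume-ClusterNode -Name NODE1 -Failback Immediate
    while (Get-StorageJob | Where-Object JobState -eq 'Running') { Start-Sleep -Seconds 30 }
    Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus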



Storage Spaces Direct / Cluster Virtual Disk goes offline when rebooting a node


Hello

We have several hyper-converged environments based on HP ProLiant DL360/DL380.
We have 3-node and 2-node clusters running Windows 2016 with current patches; firmware updates are done and a witness is configured.

The following issue occurs on at least one 3-node and one 2-node cluster:
When we put one node into maintenance mode (correctly, as described in the Microsoft docs, having checked that everything is fine) and reboot that node, it can happen that one of the cluster virtual disks goes offline. It is always the disk Performance, the SSD-only storage in each environment. The issue occurs only sometimes, not always: sometimes I can reboot the nodes one after the other several times in a row and everything is fine, but sometimes the disk Performance goes offline. I cannot bring this disk back online until the rebooted node comes back up. After the node that was down for maintenance is back online, the virtual disk can be brought online without any issues.
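For reference, the drain sequence we understand to be the documented one for WS2016 S2D includes putting the node's drives into storage maintenance mode before the reboot; a sketch, with NODE1 standing in for the node being serviced:

    # Drain the node, put its drives into storage maintenance mode, then reboot
    Suspend-ClusterNode -Name NODE1 -Drain
    Get-StorageFaultDomain -Type StorageScaleUnit |
        Where-Object FriendlyName -eq 'NODE1' | Enable-StorageMaintenanceMode
    # ...reboot the node, and once it is back:
    Get-StorageFaultDomain -Type StorageScaleUnit |
        Where-Object FriendlyName -eq 'NODE1' | Disable-StorageMaintenanceMode
    Resume-ClusterNode -Name NODE1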

We have created 3 Cluster Virtual Disks & CSV Volumes on these clusters:
1x Volume with only SSD Storage, called Performance
1x Volume with Mixed Storage (SSD, HDD), called Mixed
1x Volume with Capacity Storage (HDD only), called Capacity

Disk Setup for Storage Spaces Direct (per Host):
- P440ar Raid Controller
- 2 x HP 800 GB NVME (803200-B21)
- 2 x HP 1.6 TB 6G SATA SSD (804631-B21)
- 4 x HP 2 TB 12G SAS HDD (765466-B21)
- No spare Disks
- Network Adapter for Storage: HP 10 GBit/s 546FLR-SFP+ (2 storage networks for redundancy)
- 3 Node Cluster Storage Network Switch: HPE FlexFabric 5700 40XG 2QSFP+ (JG896A), 2 Node Cluster directly connected with each other

Cluster Events Log is showing the following errors when the issue occurs:

Error 1069 FailoverClustering
Cluster resource 'Cluster Virtual Disk (Performance)' of type 'Physical Disk' in clustered role '6ca63b55-1a16-4bb2-ac53-2b23619e258a' failed.

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

Warning 5120 FailoverClustering
Cluster Shared Volume 'Performance' ('Cluster Virtual Disk (Performance)') has entered a paused state because of 'STATUS_NO_SUCH_DEVICE(c000000e)'. All I/O will temporarily be queued until a path to the volume is reestablished.

Error 5150 FailoverClustering
Cluster physical disk resource 'Cluster Virtual Disk (Performance)' failed.  The Cluster Shared Volume was put in failed state with the following error: 'Failed to get the volume number for \\?\GLOBALROOT\Device\Harddisk10\ClusterPartition2\ (error 2)'

Error 1205 FailoverClustering
The Cluster service failed to bring clustered role '6ca63b55-1a16-4bb2-ac53-2b23619e258a' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered role.

Error 1254 FailoverClustering
Clustered role '6ca63b55-1a16-4bb2-ac53-2b23619e258a' has exceeded its failover threshold.  It has exhausted the configured number of failover attempts within the failover period of time allotted to it and will be left in a failed state.  No additional attempts will be made to bring the role online or fail it over to another node in the cluster.  Please check the events associated with the failure.  After the issues causing the failure are resolved the role can be brought online manually or the cluster may attempt to bring it online again after the restart delay period.

Error 5142 FailoverClustering
Cluster Shared Volume 'Performance' ('Cluster Virtual Disk (Performance)') is no longer accessible from this cluster node because of error '(1460)'. Please troubleshoot this node's connectivity to the storage device and network connectivity.

Any hints / inputs appreciated. Has anyone seen something similar?

Thanks in advance

Philippe



Windows 2019 S2D cluster failed to start, event ID 1809


Hi, I have a lab with a Windows 2019 Insider cluster that I in-place upgraded to the RTM version of Server 2019. After a while the cluster shuts down, and event ID 1809 is logged:

This node has been joined to a cluster that has Storage Spaces Direct enabled, which is not validated on the current build. The node will be quarantined.
Microsoft recommends deploying SDDC on WSSD [https://www.microsoft.com/en-us/cloud-platform/software-defined-datacenter] certified hardware offerings for production environments. The WSSD offerings will be pre-validated on Windows Server 2019 in the coming months. In the meantime, we are making the SDDC bits available early to Windows Server 2019 Insiders to allow for testing and evaluation in preparation for WSSD certified hardware becoming available.

Customers interested in upgrading existing WSSD environments to Windows Server 2019 should contact Microsoft for recommendations on how to proceed. Please call Microsoft support [https://support.microsoft.com/en-us/help/4051701/global-customer-service-phone-numbers].

It's kind of weird because my S2D cluster is running in VMs. Is there some registry switch to disable this lock???
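I don't know of a supported registry switch; per the event text, the block stands until validated updates/hardware arrive. What does exist is a cmdlet to bring a quarantined node back into membership, which may at least keep a lab cluster running between quarantines; a hedged sketch:

    # Clear the quarantine flag on the local node and rejoin the cluster
    Start-ClusterNode -ClearQuarantine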



Live Migrate fails with event 21502 (2019-->2016 host)


I have a 2016 functional level cluster with Server 2019 in it (basically in the process of replacing the 2016 host with 2019).

If a VM is running on the 2019 host, I can power it off, quick migrate it to the 2016 host, power it on, and all is good.

But live migration always gives me the above error.

All I am getting in Event Data is (very descriptive?!):

Live migration of 'Virtual Machine Test' failed.

Nothing else, no reason.

If a VM is running on the 2016 host, I CAN live migrate it to 2019 fine! (albeit with the errors reported in this thread, but I do NOT have VMM in use!)

vm\service\ethernet\vmethernetswitchutilities.cpp(124)\vmms.exe!00007FF7EA3C2030: (caller: 00007FF7EA40EC65) ReturnHr(138) tid(2980) 80070002 The system cannot find the file specified.
    Msg:[vm\service\ethernet\vmethernetswitchutilities.cpp(78)\vmms.exe!00007FF7EA423BE0: (caller: 00007FF7EA328FEE) Exception(7525) tid(2980) 80070002 The system cannot find the file specified.
] 

Both hosts are IDENTICAL hardware, on the same firmware level for every component!

There is NOTHING relating to even an attempted migration in the local host's Hyper-V VMMS Admin/Operational logs.

In Hyper-V High Availability/Admin I get the same error, but with Event ID 21111.

Seb


I am wondering if it is easier to ditch 2019 and stick with 2016 for now.
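One likely explanation: live migration to a down-level host (2019 to 2016) is generally not supported, while up-level (2016 to 2019) is, which matches the asymmetry described. Also worth confirming the VM's configuration version has not been upgraded beyond what 2016 can run; a sketch, assuming the VM from the error is named 'Test':

    # A 2016 host can only run configuration versions up to 8.0; a VM upgraded
    # to 9.0 on the 2019 host can no longer move back
    Get-VM -Name 'Test' | Select-Object Name, Version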

Cluster


Dear Friends,

I have configured the cluster successfully. While adding a disk in Failover Cluster Manager, I get the error below.

[Window Title]
Information

[Content]
No disks suitable for cluster disks were found.  For diagnostic information about disks available to the cluster, use the Validate a Configuration Wizard to run Storage tests.

[OK]
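The wizard only offers disks that every node can see, that are not the boot/system disk, and that sit on a supported shared bus type. A quick check from any node (Get-Disk assumes Server 2012 or later):

    Import-Module FailoverClusters
    # Disks the cluster considers eligible right now
    Get-ClusterAvailableDisk
    # All disks this node sees, with the properties validation cares about
    Get-Disk | Select-Object Number, FriendlyName, BusType, IsClustered, OperationalStatus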


ITandIT
