News, guidelines and various community projects
Working in a client environment with Veeam v11 and v12; we just upgraded to v12. We have issues with replication not working (it started all of a sudden three weeks ago). Support said it was a known issue in v11 and we needed to upgrade. To be honest I wasn't convinced, as it had been working fine for quite some time. Anyway, we upgraded to v12, but replication is still not working. You can see the copy shows as in progress, so Veeam is trying to send data, and on the StoreOnce side you can see that Veeam files have landed/been created with ZERO bytes. It just sits there for hours and you don't see any data being transmitted for some reason. We do have a call with support, but I just wondered if you guys had any ideas. We've involved HPE and run scripts on the StoreOnce appliances, which are able to send files between the devices (1 GB files, 10 GB files, etc.).
Hi community, we are using SentinelOne as our AV solution. It isn't included by default in the AntivirusInfo.xml used by Secure Restore/SureBackup, so I had to create my own entry, based on very little information and no list of error codes. It works, but I don't feel 100% safe since it isn't included or supported out of the box. During a recent VeeamON Tour it was brought to my attention that scanning with a different AV solution than the one active on your server adds an extra level of security/checking. Defender is installed by default on these Windows servers, and Defender is a standard part of AntivirusInfo.xml; however, we don't want two AV solutions active on our systems at the same time. If we could, for example, have SureBackup run a PS script that activates Defender at the start of the job and deactivates it again at the end of the task, that would be a great solution for me. Does anyone have this kind of solution in place and would you be willing to share the PS? What about the AntivirusInfo.
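Not an answer from the field, but a minimal sketch of what such start/stop scripts could look like, assuming the Defender engine is present but passive on the guest, and that SentinelOne tolerates the toggle inside the isolated SureBackup lab. The script names are placeholders, not Veeam defaults:

```powershell
# enable-defender.ps1 -- run at SureBackup job start inside the lab VM
Set-MpPreference -DisableRealtimeMonitoring $false   # turn real-time scanning on
Update-MpSignature -ErrorAction SilentlyContinue     # refresh definitions if the lab has egress

# disable-defender.ps1 -- run at job end
Set-MpPreference -DisableRealtimeMonitoring $true    # hand the guest back to SentinelOne
```

`Set-MpPreference` and `Update-MpSignature` are the standard Defender cmdlets; whether SureBackup's script hooks fire early enough for the Secure Restore scan to pick Defender up is exactly the part I'd test first.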
Dear Veeam Community, I hope this finds you well. I've encountered an issue after updating to Veeam version 12 and was hoping someone might have come across a similar problem or can provide some guidance.

**Problem description:** After the update, I am unable to send notification emails. I consistently receive the error message: "Unable to connect to SMTP server because of invalid credentials or connection settings." I am certain of my credentials, having double-checked them multiple times.

**Steps taken:**
1. I've tried all configuration variants using both ports 587 and 465.
2. When using port 465, I am able to apply the settings, but an error then displays.
3. With port 587, the error surfaces after sending the test message.

**Mail server error logs:** From the logs, the following entries raised concerns:
* `SASL login authentication failed`
* `TLS library problem: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate`

The TLS library error is particularly concerning.
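That `bad certificate` alert suggests the handshake fails on certificate validation before credentials are ever checked, which would also explain the SASL failure. A quick way to inspect which certificate the server actually presents is an `openssl s_client` probe. The sketch below only builds the right command for each port, since 465 speaks implicit TLS while 587 upgrades a plain session via STARTTLS; the hostname is a placeholder:

```shell
# Build the openssl probe command for a given SMTP host and port.
smtp_probe_cmd() {
  host="$1"; port="$2"
  if [ "$port" = "587" ]; then
    # Port 587: plain SMTP upgraded to TLS via STARTTLS
    echo "openssl s_client -connect ${host}:${port} -starttls smtp -showcerts"
  else
    # Port 465: TLS from the first byte (implicit TLS)
    echo "openssl s_client -connect ${host}:${port} -showcerts"
  fi
}

smtp_probe_cmd mail.example.com 587
smtp_probe_cmd mail.example.com 465
```

Running the printed command against your real mail host shows the full certificate chain; a self-signed or incomplete chain there would match the symptoms.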
I'm in the market for new storage at two sites, and I'm hoping to get some input from the community here if any of you have a lot of data or fast storage. Requirements: you need to be able to hit a minimum of 2 GB/s on your Veeam jobs to post your solution, and I want you to detail your proxy, repo, and storage setup as well. (This is not a competition, though.) Currently I have two V7000s. I can hit about 2.2 GB/s sending data to 6 LTO-8 drives. I am hitting about 6k IOPS, but latency starts to increase at this point. I've been happy with this storage (7200 RPM disks), but it's time to upgrade, and I am looking at a few options. Option 1) ExaGrid at both sites, 10+ EX84 per site. Does anyone here own ExaGrid storage? What is your performance like, and how about long-term restores? The price is decent; I just have concerns about large-scale instant restores, synthetic operations, merges, etc. (Those are my current pain points with my current V7000s: it's fast until I run too many at once.) Option 2)
Hey folks, it seems like just yesterday that HashiCorp announced changes to the Terraform licensing, and already there is a possible alternative on the horizon. The Linux Foundation announced today the formation of the OpenTofu project, which is being billed as an alternative to Terraform. As they say, we will wait and see what happens, but since the LF is involved, this is definitely not dead from the start. You can read more here: https://www.linuxfoundation.org/press/announcing-opentofu
I've recently gone through the process of upgrading my Veeam implementation from v11 to v12. Everything upgraded correctly, and I even managed to get a hardened repository server upgraded successfully. My problem is that although all configured jobs are quite happily running and completing under v12, the agent backup job for the physical Veeam server itself fails. This was working before the upgrade. Is there any solution to this issue?
Hello, I'm trying to install Veeam Agent for Linux on my Debian 11, but I get two errors that cause the installation to fail:

20/09/2023 09:41:17 Failed [172.16.0.18] Failed to install veeam_18.104.22.1680_amd64.deb and blksnap_22.214.171.1240_all.deb packages: E: Unable to locate package linux-headers-5.10.0-18-amd64 E: Couldn't find any package by regex 'linux-headers-5.10.0-18-amd64' E: Couldn't find any package by regex 'linux-headers-5.10.0-18-amd64' Failed to invoke rpc command

0:00:14 20/09/2023 09:41:32 Failed [172.16.0.18] Failed to install Veeam Agent for Linux: E: Unable to locate package linux-headers-5.10.0-18-amd64 E: Couldn't find any package by regex 'linux-headers-5.10.0-18-amd64' E: Couldn't find any package by regex 'linux-headers-5.10.0-18-amd64' Failed to invoke rpc command

my config
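The blksnap module builds against kernel headers, and apt typically cannot find `linux-headers-5.10.0-18-amd64` because Debian drops header packages for superseded point releases from the index. A hedged sketch of the usual recovery (derive the package name from the running kernel rather than hard-coding it; the commented apt steps are left for a root shell):

```shell
# Headers must match the *running* kernel exactly.
PKG="linux-headers-$(uname -r)"
echo "Package to install: ${PKG}"

# Typical fix: refresh the index, upgrade to the current kernel if the
# running one has been superseded, reboot, then install the matching
# headers before retrying the agent install:
#   sudo apt-get update
#   sudo apt-get install -y "linux-headers-$(uname -r)"
```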
I would like to share with you a recipe from my "Script Kitchen" that I'm in the process of matching ingredients to. The question I asked myself: do you really need to protect your Entra ID tenant(s)? My answer: see below ;)

Intro
Entra ID stores a variety of settings and policies that are important to the continuity of your business. Anything that compromises your Entra ID configuration can result in an immediate loss of access to data and applications. So how about exporting the information to another location so it can be analyzed, and imported or reconfigured if something bad happens? And how about adding retention to the exported data? Microsoft provides some built-in retention mechanisms (the Recycle Bin), but these are not enough to meet the needs of most organizations. The Recycle Bin stores deleted objects such as users and groups for 30 days. Some limitations I associate with it:
- After 30 days the deleted objects are permanently deleted and cannot be recovered
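The retention side of the recipe can be sketched as a small pruning step: each export lands in its own dated folder, and anything older than the window gets removed. The folder layout and the 30-day window below are my assumptions, not part of the original recipe:

```shell
# Delete first-level export folders older than a retention window.
# usage: prune_exports <export_root> <retention_days>
prune_exports() {
  find "$1" -mindepth 1 -maxdepth 1 -type d -mtime "+$2" -exec rm -rf {} +
}

# Example: nightly exports land in /backups/entra/<YYYY-MM-DD>, keep 30 days:
#   prune_exports /backups/entra 30
```

This keeps the exported configuration well past the Recycle Bin's fixed 30-day limit, for whatever window you choose.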
Hi folks, KodeKloud will be offering a free week starting on the 25th. There are a ton of new courses: aside from all the Kubernetes courses there are also Azure, GCP, AWS, etc. Go take a look, take a vacation from work, buy lots of coffee, do courses non-stop, then go back to work with bigger salary demands, or demand more vacation time in order to run this function in a loop 😁 https://kodekloud.com/pages/free-week?utm_source=Facebook&utm_medium=social&utm_campaign=1+Week+Until+KodeKloud%27s+Free+Week&utm_content=announcement
Hi, I have installed Veeam Agent on my Windows 11 VM. I have an external hard drive connected to the physical host and have shared the drive, and it is mapped on my VM, but I cannot see it in my options for destinations. Is there anything I can do to get this working? Thank you in advance. Colin
I am currently learning about some backup and recovery solutions from China, and I've listed a feature comparison with Veeam for reference. Please let me know if I missed any core Veeam features.
Vinchin: https://www.vinchin.com/
Aishu: https://www.aishu.cn/
SQL restart when doing instant DB recovery
The database went offline and is in Recovery Pending state. It is currently doing an instant DB recovery. Is it safe to restart SQL to bring the database online without affecting transactions already in the cache folder? Would it affect the instant DB recovery process (I don't think so, but better to ask)? What other processes should we consider before restarting SQL? Thanks.
In my first blog post, I reviewed what Backup Eagle is and how it can monitor your environment, and I also went through the installation process. Part 1 can be found here: Monitoring with Backup Eagle – Part 1. The company, Schmitz RZ Consult GmbH, graciously granted me an NFR license to test Backup Eagle within my homelab. In this post, I am going to cover the Administration Console, including:
- Installing a license file
- Adding servers for monitoring
- Dashboards
- Reports
Backup Eagle – What it is used for
Backup Eagle is used for backup monitoring, reporting, and audit & compliance.
Dashboard in the Administration Console
Launching the console is very straightforward: double-click the desktop icon "BACKUP EAGLE Administration Client", which launches the console. You can see the console when it first opens below (keeping in mind I already have data showing; a new installation will not have any data until you add servers). After you get into the dashboard
At the recent VeeamON Tour, held in London, I was lucky enough to be asked to sit on a Veeam Vanguard panel to talk about data security, which led to the question: "Everyone should have a recovery plan, but how do you ensure it is reliable?". Let's go through the points I offered on the day in a little more detail. When talking to customers about recovery plans, there are four points I like to discuss:
- Understand your valuable data/core systems and processes
"Kind of obvious, Craig" I hear you say. Well, yes and no: it's not always the usual suspects. Everyone would immediately point to production data as their most valuable data, and that's not wrong. It's just that there's more to valuable data than production data. Companies need to be able to take a step back from production and look at their data estate holistically. Yes, we want production data and systems protected, up and running ASAP, but what about the data and systems that sit upstream or downstream from production? For example
Hello everyone, I'll take this opportunity to leave here the video library recordings for all the on-demand sessions, which are now available! 🎦🤓 I'd also like to ask which VMware Explore 2023 Las Vegas announcement seemed most interesting to you. 💡 Tip: I would start by looking at the keynotes 🕺🏼 The VMware Explore 2023 Las Vegas video library recordings
Hello, we have three ESXi servers connected to our Nimble storage through a Brocade Fibre Channel switch; each ESXi server connects to the Nimble storage with its own zone: ESXi1_Nimble, ESXi2_Nimble, ESXi3_Nimble. A few months after this configuration was done, we added a physical server running Veeam B&R v12 as the backup server and repository, which is also a backup proxy. It is connected to the same FC switch, and I created a zone between the Veeam B&R server and the Nimble storage: VeeamBR_Nimble. Most recently, we added an HPE tape library to this mix for GFS tape backups: I added the library to the FC switch and created a zone between the Veeam B&R server and the tape drive: VeeamBR_HPETapeDrive. Each night, the Veeam B&R server runs backup jobs and stores them in its local repository (D: drive), then runs backup to tape, which writes from the Veeam B&R backup repository to the tape drive. I have already configured "Direct storage access" as the backup proxy's transport mode. The backup jobs do
Hi, after an upgrade to B&R 12 I need to deploy components to my offsite immutable repo. I run "sudo veeamhubrepo", choose the default user and press Enter, but SSH is never enabled, so my B&R server can't connect. I recall once seeing a note in the header of veeamhubrepo about SSH being enabled, but now it just shows the IP. Any hints?
Hi, good day. This is my first post on the Veeam communities. Please, I need help with adding Hyper-V hosts to my Veeam infrastructure. I did an upgrade to Veeam 12 about a month ago and realized that my Hyper-V hosts were offline. Of the four, two are running Windows Server 2022. I saw some things about DCOM hardening, which I don't want to accept (yet). Can anyone help me with measures to resolve this problem?
Hi everyone, I'm looking for advice on the best way to keep backup repository data replicated off-site for redundancy. The setup is two hardened Linux repositories, each on a Synology NAS. At the moment the backups are copied using backup copy jobs; however, that does not copy the whole existing backup chain, just the backups created after the backup copy job itself was created. I would like to keep both repositories in full sync in case one of them is lost in a disaster.