Join discussions around Veeam community projects, Veeam events, industry and technology news
- 2,002 Topics
- 18,720 Comments
I am a keyboard junkie: I use keystrokes rather than reach for the mouse whenever I can. I love that pressing F1 in any dialog window of the Veeam Backup Console brings you to the Help Center page specific to that dialog. I was playing around in the Job Session detail pane: you can use CTRL+A and then CTRL+C, or right-click any line (or just white space) in that pane and choose Copy to Clipboard. The resulting line(s) can then be pasted as text into your text/word/email application. Great. 💡 What about the other, wider views of the Veeam Backup Console? 🤔 When I have wanted to show someone an overview of Repositories, I would screen-grab: switch to the view, press WIN+SHIFT+S (which brings up the Windows Snipping Tool), drag-select, then switch to the target application and press CTRL+V. But maybe a screen grab is not best for something you might later want to be searchable. Certainly OCR and the like have become quicker and much more easily available. Take Microsoft OneNote
Happy Friday Everyone! This week I took part in some Kasten partner training, which, I've got to admit, was pretty awesome. It got me thinking: how 'ready' is the data protection community for Kubernetes? For this I shall bring back one of my favourite Community Hub features, THE POLL! So please, answer honestly; it's anonymous after all! And contribute to the conversation with anything you may have achieved in Kubernetes that you're proud of, whether it's the creation of your first lab environment, your first data protection policy live and working, the creation of your own blueprint, whatever it is, I'd love to hear it!
Can anyone point me in the direction of a best-practices/recommended process for doing maintenance on VBO repositories? For example, doing hardware maintenance on the VBO repository server and wanting to move the data to another storage location, either temporarily or as a permanent migration. We seem to be running into a lot of JetDB errors after trying to figure out the best process ourselves.
The roadmap for LTO tape has been extended up to generation 14, which is projected to store up to 576 TB of uncompressed data and an unbelievable 1,440 TB of compressed data on one tape. It will be interesting to see whether this can be realized and the capacity doubled with each of the coming generations... See more here: https://www.lto.org/2022/09/lto-program-announces-extension-to-the-lto-tape-technology-roadmap-to-generation-14/
I'm curious how long you all wait to update to the latest version, firmware, or software release in your production systems. For me, I would say I am on the early end of the spectrum, but I usually try to roll things out to TEST first, if I can, and verify for a few days. This doesn't always work, as there are times TEST won't fully mirror PROD. VMware is one I try to wait a little bit on, due to some pretty significant issues I have had in the past with host failures. Since it's extremely critical infrastructure, downtime isn't allowed AT ALL for us. Another thing I use to decide is the support I get. With Veeam, I have had so many good experiences that I am confident they will resolve my issue. I also tend to keep my config backups, and having had to reinstall once or twice, it's quite doable in a pinch.
Following this KB, https://www.veeam.com/kb4264, I successfully configured restore to AWS EC2 via endpoints and VPN, so I can restore a VM from vSphere to EC2 via VPN. And now I have a question :) Is it possible to restore from EC2 to on-premises via VPN, not through the Internet?
I was wondering how the point system works for this community. I see some people who are brand new at 600 points, then people who have topics and comments at only a few points. I can see points being added when I am active. Can someone confirm whether this is correct? I am assuming that if you are not active, your points go down month after month. I am also under the assumption that new members start around 500–600 points. I joined a while ago and wasn't active for a bit, so I'm just trying to make sense of why brand-new users have so many more points :)
Trying to clean up data from an old backup copy job after deleting the job. It's now showing under Disks (Orphaned) and Object Storage (Orphaned). All needed restore points are available from the Capacity Tier (Azure object storage), and no more restore points are left to move to the Capacity Tier (because all have been moved already). Still, multiple TBs are stored on the Performance Tier, but if I select Delete from Disk, it tells me it will also delete the dependent restore points in the Capacity Tier...? There are 3 restore points available on the Performance Tier and 60+ on the Capacity Tier. I would like to keep all restore points in the Capacity Tier until I manually delete them according to the retention policy. Can I say Yes to this and still have restore points available from the Capacity Tier?
One of my customers upgraded their vSAN cluster from 6.7 U3 to 7.0 U3d; unfortunately, the customer reported that the performance of some virtual machines dropped. We have opened an SR with VMware GSS, but they cannot find the reason. I searched for useful information on the Internet, and it may be related to a known issue on vSAN 7.0. I shared the following information with VMware GSS, and now we are waiting for confirmation from VMware:
https://communities.vmware.com/t5/VMware-vSAN-Discussions/vSAN-7-0-poor-write-performance-and-high-latency-with-NVMe/td-p/2807761
https://kb.vmware.com/s/article/88832
Hello Veeam community! I have been trying to determine the best approach for a backup migration from one object storage bucket to another and need some help. My customer has a large amount of data, hundreds of TBs of backups, and a bucket copy would take 30 days, which is too long. They also don't have the space to pull it all back down locally to copy to the new bucket, so that option won't work. I am going to have them set up a new capacity tier in the SOBR and start new jobs going to the new bucket. The issue is that the old backups in the old bucket are left orphaned, and I need to move some, but not all, of the backups due to retention policy. I was trying to determine which backup objects are related to which backup jobs, but can't determine the backup_id via PowerShell. My thinking is that this would allow me to "grab" the backups that need to be moved to the new bucket. Does anyone have a helpful tip on how to get this accomplished? Thank you!
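As a starting point for mapping backups to their jobs, here is a minimal, untested sketch using Veeam's PowerShell module. The cmdlet name `Get-VBRBackup` is real, but the exact property and method names (`JobId`, `JobType`, `GetRepository()`) are assumptions to verify against the Veeam PowerShell reference for your version:

```powershell
# Hedged sketch: list each backup with its originating job ID and repository,
# so backups still under retention can be matched to their (deleted or live) jobs.
# Assumes the Veeam.Backup.PowerShell module is present on the backup server.
Import-Module Veeam.Backup.PowerShell

Get-VBRBackup | ForEach-Object {
    [PSCustomObject]@{
        BackupName = $_.Name
        JobId      = $_.JobId               # assumed property linking backup to job
        JobType    = $_.JobType             # assumed property
        Repository = $_.GetRepository().Name # assumed method on the backup object
    }
} | Format-Table -AutoSize
```

From output like this you could filter on the job IDs you care about before deciding which objects to copy to the new bucket; again, confirm the property names in your environment first.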
Hi,
Please can you clarify whether it is possible with Veeam to do a backup copy job directly to, say, an AWS bucket or Azure Blob storage? We want to move away from using tape, but do not want to implement Veeam Cloud Connect and all its associated complexity.
Kind regards,
Andrew Rycroft
Hi. I use restore from on-premises to AWS via Veeam. It works perfectly, but I need an option to run scripts on AWS after restore, something like user data on AWS ("Specify user data to provide commands or a command script to run when you launch your instance. Input is base64 encoded when you launch your instance unless you select the User data has already been base64 encoded check box."). Is there such functionality?
Happy Friday everyone! So, next month it will have been two years since the Veeam Community Hub was first revealed during VeeamON Update 2020. In that time, there are now over 13.6k users, nearly 20k comments, and over 2.6k topics. Let's take a moment and reflect on those numbers; it's VEEAMAZING. What's also amazing is the visibility this platform gives to great content; even content that isn't directly related to Veeam can get hundreds or thousands of views. Exhibit A: Exhibit B: Exhibit C: Now I turn to the community and ask: was there any particular topic or content that you were especially impressed with? Maybe it didn't get the visibility you thought it deserved at the time and you want to re-share it? Whatever the reason, please share your favourite topics below and why you like them! For me, it's actually this awesome recovery post from @falkob that I wish had got more attention: It's a scenario we never want, but will inevitably face for one reason or another, having someone ex
Hello, I attempted to upgrade a v4 Veeam Agent (free version; this is at home) to the latest v184.108.40.20608 and it failed, saying I needed to install SQL Server 2012 Express LocalDB, which was already there. (FWIW, I successfully did this upgrade on two other similar PCs before trying this one.) I tried uninstalling the old version of Veeam and it still failed. I then uninstalled SQL Server Management Objects and CLR Types, trying to get a clean slate. The installer said it was installing the prerequisites, then died again. The logs say that SQL Server installed successfully, but either Veeam just doesn't think so, or it can't see it. The "endpoint<timestamp>.log" shows an exception that just doesn't say much of anything:
[16.09.2022 14:28:53][INFO] Checking presence of Microsoft Universal C Runtime redist.
[16.09.2022 14:28:53][INFO] 'ucrtbase.dll' module is found, redistributable is already installed.
[16.09.2022 14:28:53][INFO] Checking presence of Microsoft SQL Server 2012 Sy
This looks pretty cool, anyone attending it: https://www.veeam.com/dreamforce-2022.html?utm_campaign=Country:+Global,Creative:+Animated+Banner,Format:+3rd+party+event,Program:+Salesforce,Title:+Dreamforce&utm_source=GlobalNewsletter&utm_medium=socialteam&utm_content=1661998872 ?
Hi Team, we have a challenge with one of our customers. They have requested a monthly full backup to be stored with 12-month retention, and for the same to be stored in the cloud, considering on-premises storage space. We have added Azure Blob object storage and created a SOBR with it as the capacity tier and on-premises storage as the performance tier. We have configured the data to move from performance to capacity every 5 days, and configured a monthly full backup with 12 restore points to run on the last day of the month. Movement of backups does not happen; so far, 2 full backups have run and been stored on the local on-premises repository. Support initially stated that we would need to wait for 2 backups for the previous backup to move, but this has not happened. Now they have stated the configuration should be part of a GFS configuration. Nothing substantial in terms of configuration has been provided clearly. Need some help or recommendations for the same. Not
I would very much like ANY storage target to be allowed as a SOBR capacity or archive tier. As a for-instance: the primary performance tier for the SOBR lands on a RAID 10 of either HDD or SSD, and then, rather than being forced to use object storage for the capacity and archive tiers, it would be possible to simply offload to another repo that sits on a much slower but more capacity-efficient RAID 60 array of disks. That would be much appreciated. That's just one example, of course, but the point is that I would like the system admin to ultimately determine what they want to qualify as "performance", "capacity", and "archive" worthy storage. If they determine, based on their needs, that the "performance" tier is a RAID 10 of SSDs, their "capacity" tier is a RAID 10 of HDDs, and their "archive" is a RAID 60 of HDDs... let them be the judge of that. Don't force their hand into using object storage. For all anyone knows, what the backup/storage admin want as
Frequently we need to troubleshoot a Veeam Backup Server over the network. However, if Windows Firewall is enabled on the OS, it doesn't reply to ping/echo requests. At this point many people disable Windows Firewall, and most of the time they forget to enable it again. So, if you want to allow ping without disabling Windows Firewall, you can just run this simple command in cmd:
netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow
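The tip above can be rounded out with the IPv6 counterpart and with cleanup once troubleshooting is done. This is a sketch of the same netsh syntax (run from an elevated prompt; the IPv6 rule name is my own label, and ICMPv6 type 128 is Echo Request):

```powershell
# IPv6 counterpart of the rule above: ICMPv6 type 128 is Echo Request
netsh advfirewall firewall add rule name="ICMP Allow incoming V6 echo request" protocol=icmpv6:128,any dir=in action=allow

# Remove both rules again once troubleshooting is finished,
# so the firewall returns to its original state
netsh advfirewall firewall delete rule name="ICMP Allow incoming V4 echo request"
netsh advfirewall firewall delete rule name="ICMP Allow incoming V6 echo request"
```

Deleting the rules afterwards addresses the post's original complaint: people open the firewall for troubleshooting and forget to close it again.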
I've just been informed by support that Veeam does not allow restoring more than one file per restore job from a NAS archive repository. They suggested I create a feature request, so here goes. Can I suggest adding a "feature" that allows multiple files and whole folders to be restored from archives? Restoring one file at a time is a ludicrous proposition if you need to restore thousands of files, or even ten. We'll need to find an alternative solution, as we are already concerned about user requests for future data restoration from our already-archived repositories.
Happy Friday, Everyone! Today I ask: what was your most disappointing piece of tech? We all know tech is supposed to improve our lives, jobs, etc., but what about when it over-promised and under-delivered? For me, two things stick in my mind. From a gaming angle, the most disappointing tech I ever witnessed was the Steam Controller. The prototype looked amazing and revolutionary: a controller with a built-in, full-colour screen that could be controlled by the games. Then they scaled back their vision the closer it got to release, ending up mainly a generic controller with a couple of haptic touch circles that were supposed to allow for more adaptive control, but nothing supported it, and now it gathers dust on my shelf! From a more high-tech angle, it's got to be 3D XPoint, though you probably know it as Intel Optane, even though it was a joint venture between Intel and Micron. Again, a vision was sold that this was it: we'd be swiftly bridging the gap between storage and volatile RA