Certainly. If you want to use the capacity planning functionality, you can go to a datastore or datacenter resource and run the "Datastore Capacity Utilization Report" or use the "Datastore Inventory" views.
However, sometimes you need to look at things a different way when it comes to storage. Specifically: what do you actually want? Average? No. Max/peak? Not necessarily. How about the 95th percentile? Sure. When I worked for a storage VAR, we'd evaluate existing environments at the 95th percentile and aim at that for a theoretical 1:1 swap of existing storage versus what was needed. Obviously, if something is obscenely undersized with high latency, the true workload could theoretically be much higher than observed, but for most semi-healthy workloads this worked rather well once you fold in a little headroom plus auto-tiering to account for pesky free radicals. Sizing SANs for datapoint peaks just isn't realistic, and the data needs a little massage to bring the numbers down to reality.

Thanks to vC Ops, we can do this on the fly, interactively. It's called the Data Distribution Analysis widget, and it is magnificent at this task. If you want to get an eye on your environment, use a Resource or Top-N widget to populate datastores (by IOPS), then use a widget interaction to pass the data to the Data Distribution Analysis widget. Configure the widget to show only the 1st period (no 2nd period), add the percentile visualizations, and away you go.
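If you want to sanity-check the widget's math outside the UI, here's a minimal sketch of the same 95th-percentile-plus-headroom sizing logic in Python. The IOPS samples and the 20% headroom factor are made-up values for illustration, not anything exported from vC Ops.

```python
import numpy as np

# Hypothetical per-datastore IOPS samples (e.g., 5-minute rollups over a week).
iops_samples = np.array([410, 380, 2200, 450, 520, 9800, 610, 480, 700, 550])

peak = iops_samples.max()
p95 = np.percentile(iops_samples, 95)

# Size to the 95th percentile plus a little headroom rather than the raw peak;
# auto-tiering is assumed to absorb the remaining transient spikes.
headroom = 1.20  # assumed 20% cushion
target_iops = p95 * headroom

print(f"peak: {peak:.0f} IOPS, p95: {p95:.0f} IOPS, "
      f"sizing target: {target_iops:.0f} IOPS")
```

Note how the single 9800-IOPS spike dominates the peak but barely moves the 95th percentile, which is exactly why percentile-based sizing lands on a far more realistic target than sizing for the absolute max.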