Thursday, October 30, 2014

The Cloudcast DevOps Incident Management with BigPanda [feedly]



----
The Cloudcast DevOps Incident Management with BigPanda
// The Cloudcast (.NET)

Aaron talks with Assaf Resnick (CEO of @BigPanda) about the evolving world of Operations and the new challenges of Incident Response with Continuous Integration/Delivery. Music Credit: Nine Inch Nails (nin.com)
----

Shared via my feedly reader




Sent from my iPad

Wednesday, October 29, 2014

DevOps + IoT - DevOps.com [feedly]





Sent from my iPad

Lower Hosted Services Overhead, Speed Desktops-as-a-Service Implementation Time With Insight from Citrix Summit Service Provider Sessions [feedly]



----
Lower Hosted Services Overhead, Speed Desktops-as-a-Service Implementation Time With Insight from Citrix Summit Service Provider Sessions
// Citrix Blogs

Does your managed services provider business aim to bring hosted services to market faster with less implementation and support time? Would you like to increase the margin around Desktops-as-a-Service by streamlining backend processes and designing a more scalable service? Both of these goals may be achievable with the technical staff you have in place today! Teams can learn how at Citrix Summit. Learn the details…

Read More


----

Shared via my feedly reader




Sent from my iPad

Updating Database Connection Strings in XenDesktop 7.x [feedly]



----
Updating Database Connection Strings in XenDesktop 7.x
// Citrix Blogs

A while ago I did a post about updating connection strings for mirroring. Since then, I've had other requests for connection string manipulations. So I'm going to pull them into one post with an update for 7.6, Availability Groups and script updates. XenDesktop 7.6 XenDesktop 7.6 introduced the Analytics Service which also needs its db connection string updating. The service is responsible for sending information…

Read More


----

Shared via my feedly reader




Sent from my iPad

Why Sign Up to Update Citrix Certifications Right Away? [feedly]



----
Why Sign Up to Update Citrix Certifications Right Away?
// Citrix Blogs

Citrix Incentive Programs Will Maximize Your Potential to Profit! Not only that, our up-to-date certification will help ensure that your Citrix projects run smoothly. Update:  Effective July 15, 2015,  there will be changes to the Citrix Solution Advisor Program.  Legacy certifications such as the CCIA and CCEE for Virtualization will no longer count towards program requirements.   Take action!  Make plans now to equip your…

Read More


----

Shared via my feedly reader




Sent from my iPad

XenApp 6.5 to 7.6 Migration: Selectively Importing Applications [feedly]



----
XenApp 6.5 to 7.6 Migration: Selectively Importing Applications
// Citrix Blogs

The XenApp 6.5 to XenApp 7.6 Migration Tool consists of a series of easy-to-use PowerShell scripts. These export Farm and Policy data from XenApp 6.5 to XML files. These XML files are then imported via script into an existing XenDesktop 7.6 site. The scripts are available from the XenApp 7.6 product download page; you will have to log in with an appropriate Login ID…

Read More


----

Shared via my feedly reader




Sent from my iPad

Should I upgrade from XenApp 7.5 to XenApp 7.6? [feedly]



----
Should I upgrade from XenApp 7.5 to XenApp 7.6?
// Citrix Blogs

After delivering the XenApp 7.6 Upgrade webinar I received a few questions asking if it is a good idea to upgrade from XenApp 7.5 to XenApp 7.6. My first reaction is, "Of course you should. Why wouldn't you?" But I'm a little biased. :) You need to ask yourself if the new features within XenApp 7.6 are important enough to upgrade. Look at the…

Read More


----

Shared via my feedly reader




Sent from my iPad

2014 Summer of Interns at OpenStack [feedly]



----
2014 Summer of Interns at OpenStack
// The OpenStack Blog

OpenStack has been a regular participant in community-led internship programs, such as the FOSS Outreach Program and, for the first time this year, the Google Summer of Code. Our wonderful mentors and coordinators have made it possible for OpenStack to have some great interns over the (northern hemisphere) summer. Julie Pichon has helped collect thoughts from the interns. Here is what they have to say about their experience:

Artem Shepelev worked on a scheduler solution based on the non-compute metrics: Working as a part of Google Summer of Code program was very interesting and useful for me. I liked the experience of working with a real project with all its difficulty, size and people involved with it. (Mentors: Yathiraj Udupi, Debojyoti Dutta)

Tzanetos Balitsaris worked on measuring the performance of the deployed Virtual Machines on OpenStack: The experience was really good. Of course one has to sacrifice some things over the summer, but at the end of the day, you have the feeling that it was worth it. (Mentors: Boris Pavlovic and Mikhail Dubov).

Rishabh Kumar: I worked on improving the benchmarking context mechanism in the Rally project. It was a really awesome experience to be part of such a vibrant and diverse community. Getting to know people from all sorts of geographies and the amazing things they are doing humbled me a lot. The code reviews were particularly good with so many people giving their reviews which made me a better programmer. (Mentors: Boris Pavlovic and Mikhail Dubov).

Prashanth Raghu: GSoC was a great opportunity for me to get started with learning about contributing to open source. During my project I was greatly backed by the community which helped me a lot in finally getting my project successfully shipped into the OpenStack Zaqar repository. It was great fun interacting with the team and I would like to thank all those who supported me in this wonderful experience. (Mentor: Alejandro Cabrera).

Ana: I am very grateful for being given the chance to participate in OPW. I had a really positive experience thanks to an amazing mentor, Eoghan Glynn, who explained everything clearly and was enthusiastic about the project and was patient with my many mistakes. I was working on Gnocchi, a new API for Ceilometer; my project was to add moving statistics to the available aggregation functionality. (Mentor: Eoghan Glynn).

Victoria: During my GSoC internship in OpenStack I researched the feasibility of adding AMQP 1.0 as a storage backend for the Messaging and Notifications Service (Zaqar). Since this was not possible, I changed the direction of my research to the transport layer and worked
on creating a POC for it. (Mentor: Flavio Percoco).

Masaru: Awesome experience, which was more than I expected at the beginning of my project on the VMware API! Also, great and considerate hackers there. I'm grateful to have participated in GSoC 2014 as one of the students from the OpenStack Foundation. (Mentor: Mr. Arnaud Legendre).

Nataliia: It was a fascinating opportunity. During the internship I worked with the Zaqar team, mainly on Python 3 support, but also with developing api-v1.1. Professionally I learnt a lot, about Python 3 of course, but also from reading and participating in discussions of other interns: about Redis and AMQP and how to do proper benchmarking. Socially-wise: There was no feeling of being "an intern". The team considers all interns as teammates and treats them equally as any other developer. Anyone could (and actually can — why not?) actively participate in discussions and in making decisions. After finishing it, I helped with other tasks, in particular api-v1.1-response-document-changes. (Mentors: Flavio Percoco, Kurt Griffiths).

OpenStack doesn't plan on stopping there and is already preparing for the next round of the FOSS Outreach Program, this time scheduled during the southern hemisphere summer round starting this December. Stay tuned for more announcements.


----

Shared via my feedly reader




Sent from my iPad

v0.87 Giant released [feedly]



----
v0.87 Giant released
// Ceph

This release will form the basis for the stable release Giant, v0.87.x. Highlights for Giant include:

  • RADOS Performance: a range of improvements have been made in the OSD and client-side librados code that improve the throughput on flash backends and improve parallelism and scaling on fast machines.
  • CephFS: we have fixed a raft of bugs in CephFS and built some basic journal recovery and diagnostic tools. Stability and performance of single-MDS systems are vastly improved in Giant. Although we do not yet recommend CephFS for production deployments, we do encourage testing for non-critical workloads so that we can better gauge the feature, usability, performance, and stability gaps.
  • Local Recovery Codes: the OSDs now support an erasure-coding scheme that stores some additional data blocks to reduce the IO required to recover from single OSD failures.
  • Degraded vs misplaced: the Ceph health reports from 'ceph -s' and related commands now make a distinction between data that is degraded (there are fewer than the desired number of copies) and data that is misplaced (stored in the wrong location in the cluster). The distinction is important because the latter does not compromise data safety.
  • Tiering improvements: we have made several improvements to the cache tiering implementation that improve performance. Most notably, objects are not promoted into the cache tier by a single read; they must be found to be sufficiently hot before that happens.
  • Monitor performance: the monitors now perform writes to the local data store asynchronously, improving overall responsiveness.
  • Recovery tools: the ceph_objectstore_tool is greatly expanded to allow manipulation of an individual OSD's data store for debugging and repair purposes. This is most heavily used by our QA infrastructure to exercise recovery code.
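
    As a rough illustration of that last point, inspecting a stopped OSD's data store with the tool might look like the sketch below. The paths are the default locations and the list-pgs operation reflects the Giant-era tool as we understand it; treat the exact flags as an assumption and check the tool's --help on your build.

      # run only against a stopped OSD; paths are the defaults and may differ on your cluster
      ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-0 \
          --journal-path /var/lib/ceph/osd/ceph-0/journal --op list-pgs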

UPGRADE SEQUENCING

  • If your existing cluster is running a version older than v0.80.x Firefly, please first upgrade to the latest Firefly release before moving on to Giant. We have not tested upgrades directly from Emperor, Dumpling, or older releases.

    We have tested:

    • Firefly to Giant
    • Dumpling to Firefly to Giant
  • Please upgrade daemons in the following order:
    1. Monitors
    2. OSDs
    3. MDSs and/or radosgw

    Note that the relative ordering of OSDs and monitors should not matter, but we primarily tested upgrading monitors first.
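
    A minimal sketch of that ordering on a sysvinit-based host follows; the daemon IDs are placeholders, and upstart- or systemd-managed hosts will use their own service syntax.

      # upgrade the Ceph packages on every host first, then restart daemons in order
      sudo service ceph restart mon.a      # 1. monitors, one at a time
      sudo service ceph restart osd.0      # 2. then OSDs
      sudo service ceph restart mds.a      # 3. finally MDSs and/or radosgw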

    UPGRADING FROM V0.80.X FIREFLY

    • The client-side caching for librbd is now enabled by default (rbd cache = true). A safety option (rbd cache writethrough until flush = true) is also enabled so that writeback caching is not used until the library observes a 'flush' command, indicating that the librbd user is passing that operation through from the guest VM. This avoids potential data loss when used with older versions of qemu that do not support flush.

    • The 'rados getxattr …' command used to add a gratuitous newline to the attr value; it now does not.
    • The *_kb perf counters on the monitor have been removed. These are replaced with a new set of *_bytes counters (e.g., cluster_osd_kb is replaced by cluster_osd_bytes).
    • The rd_kb and wr_kb fields in the JSON dumps for pool stats (accessed via the ceph df detail -f json-pretty and related commands) have been replaced with corresponding *_bytes fields. Similarly, the total_space, total_used, and total_avail fields are replaced with total_bytes, total_used_bytes, and total_avail_bytes fields.
    • The rados df --format=json output read_bytes and write_bytes fields were incorrectly reporting ops; this is now fixed.
    • The rados df --format=json output previously included read_kb and write_kb fields; these have been removed. Please use read_bytes and write_bytes instead (and divide by 1024 if appropriate).
    • The experimental keyvaluestore-dev OSD backend had an on-disk format change that prevents existing OSD data from being upgraded. This affects developers and testers only.
    • mon-specific and osd-specific leveldb options have been removed. From this point onward users should use the leveldb_* generic options and add the options in the appropriate sections of their configuration files. Monitors will still maintain the following monitor-specific defaults:

      leveldb_write_buffer_size = 32*1024*1024 = 33554432 // 32MB
      leveldb_cache_size = 512*1024*1024 = 536870912 // 512MB
      leveldb_block_size = 64*1024 = 65536 // 64KB
      leveldb_compression = false
      leveldb_log = ""

      OSDs will still maintain the following osd-specific defaults:

      leveldb_log = ""

    • CephFS support for the legacy anchor table has finally been removed. Users with file systems created before firefly should ensure that inodes with multiple hard links are modified prior to the upgrade to ensure that the backtraces are written properly. For example:
      sudo find /mnt/cephfs -type f -links +1 -exec touch \{\} \;
    • We disallow nonsensical 'tier cache-mode' transitions. From this point onward, 'writeback' can only transition to 'forward' and 'forward' can transition to 1) 'writeback' if there are dirty objects, or 2) any if there are no dirty objects.
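
      For reference, cache modes are switched with the 'ceph osd tier cache-mode' command; 'hot-pool' below is a placeholder cache pool name.

      ceph osd tier cache-mode hot-pool forward     # allowed: writeback -> forward
      ceph osd tier cache-mode hot-pool writeback   # allowed from forward while dirty objects remain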

    NOTABLE CHANGES SINCE V0.86

    • ceph-disk: use new udev rules for centos7/rhel7 (#9747 Loic Dachary)
    • libcephfs-java: fix fstat mode (Noah Watkins)
    • librados: fix deadlock when listing PG contents (Guang Yang)
    • librados: misc fixes to the new threading model (#9582 #9706 #9845 #9873 Sage Weil)
    • mds: fix inotable initialization (Henry C Chang)
    • mds: gracefully handle unknown lock type in flock requests (Yan, Zheng)
    • mon: add read-only, read-write, and role-definer roles (Joao Eduardo Luis)
    • mon: fix mon cap checks (Joao Eduardo Luis)
    • mon: misc fixes for new paxos async writes (#9635 Sage Weil)
    • mon: set scrub timestamps on PG creation (#9496 Joao Eduardo Luis)
    • osd: erasure code: fix buffer alignment (Janne Grunau, Loic Dachary)
    • osd: fix alloc hint induced crashes on mixed clusters (#9419 David Zafman)
    • osd: fix backfill reservation release on rejection (#9626, Samuel Just)
    • osd: fix ioprio option parsing (#9676 #9677 Loic Dachary)
    • osd: fix memory leak during snap trimming (#9113 Samuel Just)
    • osd: misc peering and recovery fixes (#9614 #9696 #9731 #9718 #9821 #9875 Samuel Just, Guang Yang)

    NOTABLE CHANGES SINCE V0.80.X FIREFLY

    • bash completion improvements (Wido den Hollander)
    • brag: fixes, improvements (Loic Dachary)
    • buffer: improve rebuild_page_aligned (Ma Jianpeng)
    • build: fix build on alpha (Michael Cree, Dmitry Smirnov)
    • build: fix CentOS 5 (Gerben Meijer)
    • build: fix yasm check for x32 (Daniel Schepler, Sage Weil)
    • ceph-brag: add tox tests (Alfredo Deza)
    • ceph-conf: flush log on exit (Sage Weil)
    • ceph.conf: update sample (Sebastien Han)
    • ceph-dencoder: refactor build a bit to limit dependencies (Sage Weil, Dan Mick)
    • ceph-disk: add Scientific Linux support (Dan van der Ster)
    • ceph-disk: do not inadvertently create directories (Owen Synge)
    • ceph-disk: fix dmcrypt support (Sage Weil)
    • ceph-disk: fix dmcrypt support (Stephen Taylor)
    • ceph-disk: handle corrupt volumes (Stuart Longlang)
    • ceph-disk: linter cleanup, logging improvements (Alfredo Deza)
    • ceph-disk: partprobe as needed (Eric Eastman)
    • ceph-disk: show information about dmcrypt in 'ceph-disk list' output (Sage Weil)
    • ceph-disk: use partition type UUIDs and blkid (Sage Weil)
    • ceph: fix for non-default cluster names (#8944, Dan Mick)
    • ceph-fuse, libcephfs: asok hooks for handling session resets, timeouts (Yan, Zheng)
    • ceph-fuse, libcephfs: fix crash in trim_caps (John Spray)
    • ceph-fuse, libcephfs: improve cap trimming (John Spray)
    • ceph-fuse, libcephfs: improve traceless reply handling (Sage Weil)
    • ceph-fuse, libcephfs: virtual xattrs for rstat (Yan, Zheng)
    • ceph_objectstore_tool: vastly improved and extended tool for working offline with OSD data stores (David Zafman)
    • ceph.spec: many fixes (Erik Logtenberg, Boris Ranto, Dan Mick, Sandon Van Ness)
    • ceph.spec: split out ceph-common package, other fixes (Sandon Van Ness)
    • ceph_test_librbd_fsx: fix RNG, make deterministic (Ilya Dryomov)
    • cephtool: fix help (Yilong Zhao)
    • cephtool: refactor and improve CLI tests (Joao Eduardo Luis)
    • cephtool: test cleanup (Joao Eduardo Luis)
    • clang build fixes (John Spray, Danny Al-Gaaf)
    • client: improved MDS session dumps (John Spray)
    • common: add config diff admin socket command (Joao Eduardo Luis)
    • common: add rwlock assertion checks (Yehuda Sadeh)
    • common: fix dup log messages (#9080, Sage Weil)
    • common: perfcounters now use atomics and go faster (Sage Weil)
    • config: support G, M, K, etc. suffixes (Joao Eduardo Luis)
    • coverity cleanups (Danny Al-Gaaf)
    • crush: clean up CrushWrapper interface (Xiaoxi Chen)
    • crush: include new tunables in dump (Sage Weil)
    • crush: make ruleset ids unique (Xiaoxi Chen, Loic Dachary)
    • crush: only require rule features if the rule is used (#8963, Sage Weil)
    • crushtool: send output to stdout, not stderr (Wido den Hollander)
    • doc: cache tiering (John Wilkins)
    • doc: CRUSH updates (John Wilkins)
    • doc: document new upstream wireshark dissector (Kevin Cox)
    • doc: improve manual install docs (Francois Lafont)
    • doc: keystone integration docs (John Wilkins)
    • doc: librados example fixes (Kevin Dalley)
    • doc: many doc updates (John Wilkins)
    • doc: many install doc updates (John Wilkins)
    • doc: misc updates (John Wilkins, Loic Dachary, David Moreau Simard, Wido den Hollander. Volker Voigt, Alfredo Deza, Stephen Jahl, Dan van der Ster)
    • doc: osd primary affinity (John Wilkins)
    • doc: pool quotas (John Wilkins)
    • doc: pre-flight doc improvements (Kevin Dalley)
    • doc: switch to an unencumbered font (Ross Turk)
    • doc: updated simple configuration guides (John Wilkins)
    • doc: update erasure docs (Loic Dachary, Venky Shankar)
    • doc: update openstack docs (Josh Durgin)
    • filestore: disable use of XFS hint (buggy on old kernels) (Samuel Just)
    • filestore: fix xattr spillout (Greg Farnum, Haomai Wang)
    • fix hppa arch build (Dmitry Smirnov)
    • fix i386 builds (Sage Weil)
    • fix struct vs class inconsistencies (Thorsten Behrens)
    • global: write pid file even when running in foreground (Alexandre Oliva)
    • hadoop: improve tests (Huamin Chen, Greg Farnum, John Spray)
    • hadoop: update hadoop tests for Hadoop 2.0 (Huamin Chen)
    • init-ceph: continue starting other daemons on crush or mount failure (#8343, Sage Weil)
    • journaler: fix locking (Zheng, Yan)
    • keyvaluestore: fix hint crash (#8381, Haomai Wang)
    • keyvaluestore: header cache (Haomai Wang)
    • libcephfs-java: build against older JNI headers (Greg Farnum)
    • libcephfs-java: fix gcj-jdk build (Dmitry Smirnov)
    • librados: fix crash on read op timeout (#9362 Matthias Kiefer, Sage Weil)
    • librados: fix lock leaks in error paths (#9022, Pavan Rallabhandi)
    • librados: fix pool existence check (#8835, Pavan Rallabhandi)
    • librados: fix rados_pool_list bounds checks (Sage Weil)
    • librados: fix shutdown race (#9130 Sage Weil)
    • librados: fix watch/notify test (#7934 David Zafman)
    • librados: fix watch reregistration on acting set change (#9220 Samuel Just)
    • librados: give Objecter fine-grained locks (Yehuda Sadeh, Sage Weil, John Spray)
    • librados: lttng tracepoints (Adam Crume)
    • librados, osd: return ETIMEDOUT on failed notify (Sage Weil)
    • librados: pybind: fix reads when 0 is present (#9547 Mohammad Salehe)
    • librados_striper: striping library for librados (Sebastien Ponce)
    • librbd, ceph-fuse: reduce cache flush overhead (Haomai Wang)
    • librbd: check error code on cache invalidate (Josh Durgin)
    • librbd: enable caching by default (Sage Weil)
    • librbd: enforce cache size on read requests (Jason Dillaman)
    • librbd: fix crash using clone of flattened image (#8845, Josh Durgin)
    • librbd: fix error path when opening image (#8912, Josh Durgin)
    • librbd: handle blacklisting during shutdown (#9105 John Spray)
    • librbd: lttng tracepoints (Adam Crume)
    • librbd: new libkrbd library for kernel map/unmap/showmapped (Ilya Dryomov)
    • librbd: store and retrieve snapshot metadata based on id (Josh Durgin)
    • libs3: update to latest (Danny Al-Gaaf)
    • log: fix derr level (Joao Eduardo Luis)
    • logrotate: fix osd log rotation on ubuntu (Sage Weil)
    • lttng: tracing infrastructure (Noah Watkins, Adam Crume)
    • mailmap: many updates (Loic Dachary)
    • mailmap: updates (Loic Dachary, Abhishek Lekshmanan, M Ranga Swami Reddy)
    • Makefile: fix out of source builds (Stefan Eilemann)
    • many many coverity fixes, cleanups (Danny Al-Gaaf)
    • mds: adapt to new Objecter locking, give types to all Contexts (John Spray)
    • mds: add file system name, enabled flag (John Spray)
    • mds: add internal health checks (John Spray)
    • mds: add min/max UID for snapshot creation/deletion (#9029, Wido den Hollander)
    • mds: avoid tight mon reconnect loop (#9428 Sage Weil)
    • mds: boot refactor, cleanup (John Spray)
    • mds: cephfs-journal-tool (John Spray)
    • mds: fix crash killing sessions (#9173 John Spray)
    • mds: fix ctime updates (#9514 Greg Farnum)
    • mds: fix journal conversion with standby-replay (John Spray)
    • mds: fix replay locking (Yan, Zheng)
    • mds: fix standby-replay cache trimming (#8648 Zheng, Yan)
    • mds: fix xattr bug triggered by ACLs (Yan, Zheng)
    • mds: give perfcounters meaningful names (Sage Weil)
    • mds: improve health reporting to monitor (John Spray)
    • mds: improve Journaler on-disk format (John Spray)
    • mds: improve journal locking (Zheng, Yan)
    • mds, libcephfs: use client timestamp for mtime/ctime (Sage Weil)
    • mds: make max file recoveries tunable (Sage Weil)
    • mds: misc encoding improvements (John Spray)
    • mds: misc fixes for multi-mds (Yan, Zheng)
    • mds: multi-mds fixes (Yan, Zheng)
    • mds: OPTracker integration, dump_ops_in_flight (Greg Farnum)
    • mds: prioritize file recovery when appropriate (Sage Weil)
    • mds: refactor beacon, improve reliability (John Spray)
    • mds: remove legacy anchor table (Yan, Zheng)
    • mds: remove legacy discover ino (Yan, Zheng)
    • mds: restart on EBLACKLISTED (John Spray)
    • mds: separate inode recovery queue (John Spray)
    • mds: session ls, evict commands (John Spray)
    • mds: submit log events in async thread (Yan, Zheng)
    • mds: track RECALL progress, report failure (#9284 John Spray)
    • mds: update segment references during journal write (John Spray, Greg Farnum)
    • mds: use client-provided timestamp for user-visible file metadata (Yan, Zheng)
    • mds: use meaningful names for clients (John Spray)
    • mds: validate journal header on load and save (John Spray)
    • mds: warn clients which aren't revoking caps (Zheng, Yan, John Spray)
    • misc build errors/warnings for Fedora 20 (Boris Ranto)
    • misc build fixes for OS X (John Spray)
    • misc cleanup (Christophe Courtaut)
    • misc integer size cleanups (Kevin Cox)
    • misc memory leaks, cleanups, fixes (Danny Al-Gaaf, Sahid Ferdjaoui)
    • misc suse fixes (Danny Al-Gaaf)
    • misc word size fixes (Kevin Cox)
    • mon: add audit log for all admin commands (Joao Eduardo Luis)
    • mon: add cluster fingerprint (Sage Weil)
    • mon: add get-quota commands (Joao Eduardo Luis)
    • mon: add 'osd blocked-by' command to easily see which OSDs are blocking peering progress (Sage Weil)
    • mon: add 'osd reweight-by-pg' command (Sage Weil, Guang Yang)
    • mon: add perfcounters for paxos operations (Sage Weil)
    • mon: avoid creating unnecessary rule on pool create (#9304 Loic Dachary)
    • monclient: fix hang (Sage Weil)
    • mon: create default EC profile if needed (Loic Dachary)
    • mon: do not create file system by default (John Spray)
    • mon: do not spam log (Aanchal Agrawal, Sage Weil)
    • mon: drop mon- and osd- specific leveldb options (Joao Eduardo Luis)
    • mon: ec pool profile fixes (Loic Dachary)
    • mon: fix bug when no auth keys are present (#8851, Joao Eduardo Luis)
    • mon: fix 'ceph df' output for available space (Xiaoxi Chen)
    • mon: fix compat version for MForward (Joao Eduardo Luis)
    • mon: fix crash on loopback messages and paxos timeouts (#9062, Sage Weil)
    • mon: fix default replication pool ruleset choice (#8373, John Spray)
    • mon: fix divide by zero when pg_num is adjusted before OSDs are added (#9101, Sage Weil)
    • mon: fix double-free of old MOSDBoot (Sage Weil)
    • mon: fix health down messages (Sage Weil)
    • mon: fix occasional memory leak after session reset (#9176, Sage Weil)
    • mon: fix op write latency perfcounter (#9217 Xinxin Shu)
    • mon: fix 'osd perf' reported latency (#9269 Samuel Just)
    • mon: fix quorum feature check (#8738, Greg Farnum)
    • mon: fix ruleset/ruleid bugs (#9044, Loic Dachary)
    • mon: fix set cache_target_full_ratio (#8440, Geoffrey Hartz)
    • mon: fix store check on startup (Joao Eduardo Luis)
    • mon: include per-pool 'max avail' in df output (Sage Weil)
    • mon: make paxos transaction commits asynchronous (Sage Weil)
    • mon: make usage dumps in terms of bytes, not kB (Sage Weil)
    • mon: 'osd crush reweight-subtree …' (Sage Weil)
    • mon, osd: relax client EC support requirements (Sage Weil)
    • mon: preload erasure plugins (#9153 Loic Dachary)
    • mon: prevent cache pools from being used directly by CephFS (#9435 John Spray)
    • mon: prevent EC pools from being used with cephfs (Joao Eduardo Luis)
    • mon: prevent implicit destruction of OSDs with 'osd setmaxosd …' (#8865, Anand Bhat)
    • mon: prevent nonsensical cache-mode transitions (Joao Eduardo Luis)
    • mon: restore original weight when auto-marked out OSDs restart (Sage Weil)
    • mon: restrict some pool properties to tiered pools (Joao Eduardo Luis)
    • mon: some instrumentation (Sage Weil)
    • mon: use msg header tid for MMonGetVersionReply (Ilya Dryomov)
    • mon: use user-provided ruleset for replicated pool (Xiaoxi Chen)
    • mon: verify all quorum members are contiguous at end of Paxos round (#9053, Sage Weil)
    • mon: verify available disk space on startup (#9502 Joao Eduardo Luis)
    • mon: verify erasure plugin version on load (Loic Dachary)
    • msgr: avoid big lock when sending (most) messages (Greg Farnum)
    • msgr: fix logged address (Yongyue Sun)
    • msgr: misc locking fixes for fast dispatch (#8891, Sage Weil)
    • msgr: refactor to cleanly separate SimpleMessenger implementation, move toward Connection-based calls (Matt Benjamin, Sage Weil)
    • objecter: flag operations that are redirected by caching (Sage Weil)
    • objectstore: clean up KeyValueDB interface for key/value backends (Sage Weil)
    • osd: account for hit_set_archive bytes (Sage Weil)
    • osd: add ability to prehash filestore directories (Guang Yang)
    • osd: add 'dump_reservations' admin socket command (Sage Weil)
    • osd: add feature bit for erasure plugins (Loic Dachary)
    • osd: add header cache for KeyValueStore (Haomai Wang)
    • osd: add ISA erasure plugin table cache (Andreas-Joachim Peters)
    • osd: add local_mtime for use by cache agent (Zhiqiang Wang)
    • osd: add local recovery code (LRC) erasure plugin (Loic Dachary)
    • osd: add prototype KineticStore based on Seagate Kinetic (Josh Durgin)
    • osd: add READFORWARD caching mode (Luis Pabon)
    • osd: add superblock for KeyValueStore backend (Haomai Wang)
    • osd: add support for Intel ISA-L erasure code library (Andreas-Joachim Peters)
    • osd: allow map cache size to be adjusted at runtime (Sage Weil)
    • osd: avoid refcounting overhead by passing a few things by ref (Somnath Roy)
    • osd: avoid sharing PG info that is not durable (Samuel Just)
    • osd: bound osdmap epoch skew between PGs (Sage Weil)
    • osd: cache tier flushing fixes for snapped objects (Samuel Just)
    • osd: cap hit_set size (#9339 Samuel Just)
    • osd: clean up shard_id_t, shard_t (Loic Dachary)
    • osd: clear FDCache on unlink (#8914 Loic Dachary)
    • osd: clear slow request latency info on osd up/down (Sage Weil)
    • osd: do not evict blocked objects (#9285 Zhiqiang Wang)
    • osd: do not skip promote for write-ordered reads (#9064, Samuel Just)
    • osd: fix agent early finish looping (David Zafman)
    • osd: fix ambiguous encoding order for blacklisted clients (#9211, Sage Weil)
    • osd: fix bogus assert during OSD shutdown (Sage Weil)
    • osd: fix bug with long object names and rename (#8701, Sage Weil)
    • osd: fix cache flush corner case for snapshotted objects (#9054, Samuel Just)
    • osd: fix cache full -> not full requeueing (#8931, Sage Weil)
    • osd: fix clone deletion case (#8334, Sam Just)
    • osd: fix clone vs cache_evict bug (#8629 Sage Weil)
    • osd: fix connection reconnect race (Greg Farnum)
    • osd: fix crash from duplicate backfill reservation (#8863 Sage Weil)
    • osd: fix dead peer connection checks (#9295 Greg Farnum, Sage Weil)
    • osd: fix discard of old/obsolete subop replies (#9259, Samuel Just)
    • osd: fix discard of peer messages from previous intervals (Greg Farnum)
    • osd: fix dump of open fds on EMFILE (Sage Weil)
    • osd: fix dumps (Joao Eduardo Luis)
    • osd: fix erasure-code lib initialization (Loic Dachary)
    • osd: fix extent normalization (Adam Crume)
    • osd: fix filestore removal corner case (#8332, Sam Just)
    • osd: fix flush vs OpContext (Samuel Just)
    • osd: fix gating of messages from old OSD instances (Greg Farnum)
    • osd: fix hang waiting for osdmap (#8338, Greg Farnum)
    • osd: fix interval check corner case during peering (#8104, Sam Just)
    • osd: fix ISA erasure alignment (Loic Dachary, Andreas-Joachim Peters)
    • osd: fix journal dump (Ma Jianpeng)
    • osd: fix journal-less operation (Sage Weil)
    • osd: fix keyvaluestore scrub (#8589 Haomai Wang)
    • osd: fix keyvaluestore upgrade (Haomai Wang)
    • osd: fix loopback msgr issue (Ma Jianpeng)
    • osd: fix LSB release parsing (Danny Al-Gaaf)
    • osd: fix MarkMeDown and other shutdown races (Sage Weil)
    • osd: fix memstore bugs with collection_move_rename, lock ordering (Sage Weil)
    • osd: fix min_read_recency_for_promote default on upgrade (Zhiqiang Wang)
    • osd: fix mon feature bit requirements bug and resulting log spam (Sage Weil)
    • osd: fix mount/remount sync race (#9144 Sage Weil)
    • osd: fix PG object listing/ordering bug (Guang Yang)
    • osd: fix PG stat errors with tiering (#9082, Sage Weil)
    • osd: fix purged_snap initialization on backfill (Sage Weil, Samuel Just, Dan van der Ster, Florian Haas)
    • osd: fix race condition on object deletion (#9480 Somnath Roy)
    • osd: fix recovery chunk size usage during EC recovery (Ma Jianpeng)
    • osd: fix recovery reservation deadlock for EC pools (Samuel Just)
    • osd: fix removal of old xattrs when overwriting chained xattrs (Ma Jianpeng)
    • osd: fix requesting queueing on PG split (Samuel Just)
    • osd: fix scrub vs cache bugs (Samuel Just)
    • osd: fix snap object writeback from cache tier (#9054 Samuel Just)
    • osd: fix trim of hitsets (Sage Weil)
    • osd: force new xattrs into leveldb if fs returns E2BIG (#7779, Sage Weil)
    • osd: implement alignment on chunk sizes (Loic Dachary)
    • osd: improved backfill priorities (Sage Weil)
    • osd: improve journal shutdown (Ma Jianpeng, Mark Kirkwood)
    • osd: improve locking for KeyValueStore (Haomai Wang)
    • osd: improve locking in OpTracker (Pavan Rallabhandi, Somnath Roy)
    • osd: improve prioritization of recovery of degraded over misplaced objects (Sage Weil)
    • osd: improve tiering agent arithmetic (Zhiqiang Wang, Sage Weil, Samuel Just)
    • osd: include backend information in metadata reported to mon (Sage Weil)
    • osd: locking, sharding, caching improvements in FileStore's FDCache (Somnath Roy, Greg Farnum)
    • osd: lttng tracepoints for filestore (Noah Watkins)
    • osd: make blacklist encoding deterministic (#9211 Sage Weil)
    • osd: make tiering behave if hit_sets aren't enabled (Sage Weil)
    • osd: many important bug fixes (Samuel Just)
    • osd: many many core fixes (Samuel Just)
    • osd: many many important fixes (#8231 #8315 #9113 #9179 #9293 #9294 #9326 #9453 #9481 #9482 #9497 #9574 Samuel Just)
    • osd: mark pools with incomplete clones (Sage Weil)
    • osd: misc erasure code plugin fixes (Loic Dachary)
    • osd: misc locking fixes for fast dispatch (Samuel Just, Ma Jianpeng)
    • osd, mon: add rocksdb support (Xinxin Shu, Sage Weil)
    • osd, mon: config sanity checks on start (Sage Weil, Joao Eduardo Luis)
    • osd, mon: distinguish between "misplaced" and "degraded" objects in cluster health and PG state reporting (Sage Weil)
    • osd, msgr: fast-dispatch of OSD ops (Greg Farnum, Samuel Just)
    • osd, objecter: resend ops on last_force_op_resend barrier; fix cache overlay op ordering (Sage Weil)
    • osd: preload erasure plugins (#9153 Loic Dachary)
    • osd: prevent old rados clients from using tiered pools (#8714, Sage Weil)
    • osd: reduce OpTracker overhead (Somnath Roy)
    • osd: refactor some ErasureCode functionality into command parent class (Loic Dachary)
    • osd: remove obsolete classic scrub code (David Zafman)
    • osd: scrub PGs with invalid stats (Sage Weil)
    • osd: set configurable hard limits on object and xattr names (Sage Weil, Haomai Wang)
    • osd: set rollback_info_completed on create (#8625, Samuel Just)
    • osd: sharded threadpool to improve parallelism (Somnath Roy)
    • osd: shard OpTracker to improve performance (Somnath Roy)
    • osd: simple io prioritization for scrub (Sage Weil)
    • osd: simple scrub throttling (Sage Weil)
    • osd: simple snap trimmer throttle (Sage Weil)
    • osd: tests for bench command (Loic Dachary)
    • osd: trim old EC objects quickly; verify on scrub (Samuel Just)
    • osd: use FIEMAP to inform copy_range (Haomai Wang)
    • osd: use local time for tiering decisions (Zhiqiang Wang)
    • osd: use xfs hint less frequently (Ilya Dryomov)
    • osd: verify erasure plugin version on load (Loic Dachary)
    • osd: work around GCC 4.8 bug in journal code (Matt Benjamin)
    • pybind/rados: fix small timeouts (John Spray)
    • qa: xfstests updates (Ilya Dryomov)
    • rados: allow setxattr value to be read from stdin (Sage Weil)
    • rados bench: fix arg order (Kevin Dalley)
    • rados: drop gratuitous \n from getxattr command (Sage Weil)
    • rados: fix bench write arithmetic (Jiangheng)
    • rados: fix {read,write}_ops values for df output (Sage Weil)
    • rbd: add rbdmap pre- and post- hooks, fix misc bugs (Dmitry Smirnov)
    • rbd-fuse: allow exposing single image (Stephen Taylor)
    • rbd-fuse: fix unlink (Josh Durgin)
    • rbd: improve option default behavior (Josh Durgin)
    • rbd: parallelize rbd import, export (Jason Dillaman)
    • rbd: rbd-replay utility to replay captured rbd workload traces (Adam Crume)
    • rbd: use write-back (not write-through) when caching is enabled (Jason Dillaman)
    • removed mkcephfs (deprecated since dumpling)
    • rest-api: fix help (Ailing Zhang)
    • rgw: add civetweb as default frontend on port 7480 (#9013 Yehuda Sadeh)
    • rgw: add --min-rewrite-stripe-size for object restriper (Yehuda Sadeh)
    • rgw: add powerdns hook for dynamic DNS for global clusters (Wido den Hollander)
    • rgw: add S3 bucket get location operation (Abhishek Lekshmanan)
    • rgw: allow : in S3 access key (Roman Haritonov)
    • rgw: automatically align writes to EC pool (#8442, Yehuda Sadeh)
    • rgw: bucket link uses instance id (Yehuda Sadeh)
    • rgw: cache bucket info (Yehuda Sadeh)
    • rgw: cache decoded user info (Yehuda Sadeh)
    • rgw: check entity permission for put_metadata (#8428, Yehuda Sadeh)
    • rgw: copy object data if target bucket is in a different pool (#9039, Yehuda Sadeh)
    • rgw: do not try to authenticate CORS preflight requests (#8718, Robert Hubbard, Yehuda Sadeh)
    • rgw: fix admin create user op (#8583 Ray Lv)
    • rgw: fix civetweb URL decoding (#8621, Yehuda Sadeh)
    • rgw: fix crash on swift CORS preflight request (#8586, Yehuda Sadeh)
    • rgw: fix log filename suffix (#9353 Alexandre Marangone)
    • rgw: fix memory leak following chunk read error (Yehuda Sadeh)
    • rgw: fix memory leaks (Andrey Kuznetsov)
    • rgw: fix multipart object attr regression (#8452, Yehuda Sadeh)
    • rgw: fix multipart upload (#8846, Sylvain Munaut, Yehuda Sadeh)
    • rgw: fix radosgw-admin 'show log' command (#8553, Yehuda Sadeh)
    • rgw: fix removal of objects during object creation (Patrycja Szablowska, Yehuda Sadeh)
    • rgw: fix striping for copied objects (#9089, Yehuda Sadeh)
    • rgw: fix test for identifying whether an object has a tail (#9226, Yehuda Sadeh)
    • rgw: fix URL decoding (#8702, Brian Rak)
    • rgw: fix URL escaping (Yehuda Sadeh)
    • rgw: fix usage (Abhishek Lekshmanan)
    • rgw: fix user manifest (Yehuda Sadeh)
    • rgw: fix when stripe size is not a multiple of chunk size (#8937, Yehuda Sadeh)
    • rgw: handle empty extra pool name (Yehuda Sadeh)
    • rgw: improve civetweb logging (Yehuda Sadeh)
    • rgw: improve delimited listing of bucket, misc fixes (Yehuda Sadeh)
    • rgw: improve -h (Abhishek Lekshmanan)
    • rgw: many fixes for civetweb (Yehuda Sadeh)
    • rgw: misc civetweb fixes (Yehuda Sadeh)
    • rgw: misc civetweb frontend fixes (Yehuda Sadeh)
    • rgw: object and bucket rewrite functions to allow restriping old objects (Yehuda Sadeh)
    • rgw: powerdns backend for global namespaces (Wido den Hollander)
    • rgw: prevent multiobject PUT race (Yehuda Sadeh)
    • rgw: send user manifest header (Yehuda Sadeh)
    • rgw: subuser creation fixes (#8587 Yehuda Sadeh)
    • rgw: use systemd-run from sysvinit script (JuanJose Galvez)
    • rpm: do not restart daemons on upgrade (Alfredo Deza)
    • rpm: misc packaging fixes for rhel7 (Sandon Van Ness)
    • rpm: split ceph-common from ceph (Sandon Van Ness)
    • systemd: initial systemd config files (Federico Simoncelli)
    • systemd: wrap started daemons in new systemd environment (Sage Weil, Dan Mick)
    • sysvinit: add support for non-default cluster names (Alfredo Deza)
    • sysvinit: less sensitive to failures (Sage Weil)
    • test_librbd_fsx: test krbd as well as librbd (Ilya Dryomov)
    • unit test improvements (Loic Dachary)
    • upstart: increase max open files limit (Sage Weil)
    • vstart.sh: fix/improve rgw support (Luis Pabon, Abhishek Lekshmanan)

    GETTING CEPH


----

Shared via my feedly reader




Sent from my iPad

The Cloudcast Containerized Continuous Delivery [feedly]



----
The Cloudcast Containerized Continuous Delivery
// The Cloudcast (.NET)

Aaron talks with Avi Cavale (CEO of @BeShippable), about utilizing Docker containers for Continuous Integration/Delivery. Music Credit: Nine Inch Nails (nin.com)
----

Shared via my feedly reader




Sent from my iPad

ChefDK 0.3.2 Released! [feedly]



----
ChefDK 0.3.2 Released!
// Chef Blog

Hi Chefs,

We've released ChefDK 0.3.2. This release fixes a handful of issues with the SSL configuration in ChefDK. Most importantly, we've rolled back the version of the CA Certificate bundle included in ChefDK. A recent update to the certificate bundle removed the root certificate used by AWS, which meant that connections to EC2 APIs and S3 storage would fail certificate validation. More information about this issue can be found in these bug reports:

Also in this Release

  • Fixed a packaging bug that made the CA certificate bundle only readable by root on Unix systems.
  • Re-enabled OpenSSL packaging fixes on Windows that were accidentally disabled in 0.3.0.
  • Added a generator for Policyfiles.

What Happened to 0.3.1?

While investigating the CA certificate issue, we initially believed it to be caused by OpenSSL packaging issues on Windows, which we fixed in the 0.3.1 build. Since this didn't fix the issue, we decided not to announce it or update the download page.

Install it

You can view the full changelog for this release on the ChefDK GitHub page.

Packages are available on the ChefDK downloads page.


----

Shared via my feedly reader




Sent from my iPad

Tuesday, October 28, 2014

OpenStack at LinuxCon / CloudOpen Europe [feedly]



----
OpenStack at LinuxCon / CloudOpen Europe
// The OpenStack Blog

The Foundation had a great time meeting friends old and new as a Silver sponsor of this year's LinuxCon / CloudOpen Europe in Düsseldorf, Germany on October 13-15. A huge thank you to our Community heroes Tomasz Napierala, Oded Nahum, Christian Berendt, Adalberto Medeiros, Marton Kiss, Kamil Swiatkowski, and Jamie Hannaford, who helped us staff our busy booth alongside Foundation Community Manager Stefano Maffulli and Marketing Associate Shari Mahrdt. Our swag (150 T-Shirts and Stickers) was gone by the morning of day 2 and we met a great many visitors who were very interested in OpenStack.

This year's talks showed how leaders across various industries are using the power of open source and collaboration for innovation and advancement in technology.

 

Highlights included:

  • VIP Reception at the oldest restaurant in Düsseldorf, the "Brauerei Zum Schiffchen", which was open to speakers, sponsors and media, offered traditional German food and made sure every single guest had a glass of beer in their hand at all times
  • The closing party for all attendees at the "Nachtresidenz", an architecturally unique club in Düsseldorf.

 

OpenStack related speaking sessions included:

 

(Photos by Tomasz Napierala)

 


----

Shared via my feedly reader




Sent from my iPad

Working with Chef Behind Your Firewall [feedly]



----
Working with Chef Behind Your Firewall
// Chef Blog

A self-contained Chef-managed infrastructure has never been easier to configure. The omnibus installers include all the software Chef needs to run on your clients. With a few knife configuration substitutions, or custom bootstrap templates, you are completely in charge of how and where your nodes obtain their chef-client software. As much as we love the Internet, it's common practice to limit the amount of exposure our servers have to it. Limiting ingress to your production and non-production environments via firewalls won't affect the functionality of your Chef deployment, but locking down egress can cause some difficulties.

Installing Chef Client Without Internet Access

You can download Chef software from https://www.getchef.com/download-chef-client/ and via https://www.getchef.com/chef/install.sh. However, if all of your hosts are unable to request URLs from the Internet, you have some options for getting chef-client installed in your environments.

Add the chef-client package to your images

Adding the chef-client package to the images that build your infrastructure is definitely an option if you can regenerate those images easily.

You can choose how much Chef-related material to ship in your images. It's possible to pre-populate system images with not just the chef-client software, but also the organization validator key and a client.rb file to maintain uniformity across all newly instantiated systems. This works well if your systems will all belong to a single organization.

Add chef-client to your post-install scripts

Slightly downstream from pre-populating the image with chef-client is adding chef-client to the post-install scripts of your imaging system. In tools like Kickstart, you can use the %post directive to install chef-client and procure the organization validator keys from a secure location. This approach means that you won't have to build a new operating system base image for every Chef organization your nodes belong to.
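
As a rough sketch, and assuming the chef-client package and the validator key are already reachable from internal locations (the organization name, mirror and key URLs below are placeholders), a %post section might look like this:

%post
# placeholders: ORGNAME, the internal yum mirror providing chef-client, and the key location
yum install -y chef-client
mkdir -p /etc/chef
curl -o /etc/chef/validation.pem https://secrets.internal.example.com/ORGNAME-validator.pem
curl -o /etc/chef/client.rb https://secrets.internal.example.com/ORGNAME-client.rb
%end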

In either of these two scenarios, you'll want to keep the images and organization keys securely stored. The validator key in particular has special rights to create new nodes on the Chef server, and should be protected. The chef-client cookbook includes a recipe to delete the validator key from nodes after they join their organization.

Customize your bootstrap options

Another low-friction mechanism for installing chef-client on your nodes is to customize your bootstrap options so the packages are sourced from your repositories.

The knife bootstrap subcommand has a couple of options that allow you to substitute local operations in place of the default https://www.getchef.com/chef/install.sh command.

In a homogeneous environment, using knife_bootstrap_command in your knife.rb allows you to specify a single new command to replace install.sh:
knife[:knife_bootstrap_command] = "yum install -y chef-client"
This configuration option overrides the install command for all cases of knife bootstrap.

If you have multiple target operating systems in your environment, you may want to use the template or distro option.

The distro and template options are similar to each other, but the template option allows you to specify an arbitrary path for the file you'll create. The distro option has a limited set of paths that it searches.

The content of the template files can be fairly simple. A template need only contain the command to acquire and install the chef-client software, and create the client.rb and validator.pem files. On the other hand, the template can be more complex if you support platforms that don't yet have chef-client omnibus packages.
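
To make that concrete, a stripped-down template body might look like the sketch below. In a real knife bootstrap template the validator key and client.rb values are rendered through the template's ERB helpers; everything here (package source, server URL, organization name) is a placeholder.

# minimal, illustrative bootstrap template body
yum install -y chef-client || apt-get install -y chef-client   # assumes an internal package mirror
mkdir -p /etc/chef
# a real template writes the organization validator key to /etc/chef/validation.pem here
cat > /etc/chef/client.rb <<'EOF'
chef_server_url        "https://chef.example.com/organizations/ORGNAME"
validation_client_name "ORGNAME-validator"
EOF
chef-client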

The chef-client and Chef DK packages include a number of sample bootstrap templates. Some other samples are also in the chef github repo.

All of these templates have a number of shortcuts for creating the /etc/chef/ directory, copying the validator.pem file, creating the client.rb file, and copying any secrets you might use for encrypted databags.

There is no limit to how many custom templates you can use with knife bootstrap. For example, you can create one custom template for your rpm-based systems, a second for your apt systems, and a third for your Windows systems.
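
Selecting one of those templates at bootstrap time is then just a matter of pointing knife at the file; the hostname, SSH user and template path below are placeholders:

knife bootstrap node01.example.com -x deploy --sudo --template-file ~/.chef/bootstrap/internal-rpm.erb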

Bootstrapping Windows systems is slightly different from bootstrapping Linux and UNIX systems, but the bootstrap script included with knife windows is a good place to start.

The url value in the template is what you'll want to replace with the download url that works for your environment.

Chef Environment Lifecycle

In order to maintain an environment secluded from the rest of the Internet, you'll likely use a number of local repositories to keep your systems up to date. Chef is just one more component in that maintenance cycle.

Chef will soon provide public repositories that can be mirrored to your repos.  We'll have more information soon on how to make use of these repos, hosted by our friends at PackageCloud. You can then keep your clients as up to date as your site guidelines recommend. The omnibus updater cookbook is one option for keeping your clients updated.

Some useful add-ons to Chef, like chef-vault, require that you install additional Ruby gems on your systems. For environments that are partitioned from the Internet, there are several options for mirroring and maintaining an on-premise gem server, including Gem in a Box; see http://guides.rubygems.org/run-your-own-gem-server/
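
Once a mirror is in place, pointing the gem command that ships inside the omnibus chef-client at it is a one-liner; the mirror URL below is a placeholder:

/opt/chef/embedded/bin/gem sources --add http://gems.internal.example.com/ --remove https://rubygems.org/
/opt/chef/embedded/bin/gem install chef-vault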

Finally, the yum and apt cookbooks provide lightweight resources and providers (LWRPs) for making repository configuration management easier on your client nodes.

Getting Updates

Chef server, chef-client and Chef DK all use a lot of different open source software components, so we recommend having a regular update schedule for the components you are using. Watch our blog for bug reports, feature enhancements, and updates that might affect your infrastructure.

----

Shared via my feedly reader




Sent from my iPad

How to bootstrap machines, What is the Best Way? - DevOps.com [feedly]



----
How to bootstrap machines, What is the Best Way? - DevOps.com
http://devops.com/blogs/devops-life/scripts-machine-images-best-way-bootstrap/
----
Shared via my feedly reader




Sent from my iPad

Citrix Director 7.6 Deep-Dive Part 4: Troubleshooting Machines [feedly]



----
Citrix Director 7.6 Deep-Dive Part 4: Troubleshooting Machines
// Citrix Blogs

Overview: Machine Details is a new addition to Citrix Director in XenDesktop 7.6 which enables IT administrators to get more insight into the machines in use. The Machine Details view consists of machine utilization, infrastructure details, number of sessions and hotfix details. With this new addition, administrators can view machine-level details on the Director console itself. After logging into Director, the user…

Read More


----

Shared via my feedly reader




Sent from my iPad

Sandbox. Sessions. Labs. Dive in. The Summit Session Catalog is live! [feedly]



----
Sandbox. Sessions. Labs. Dive in. The Summit Session Catalog is live!
// Citrix Blogs

The Summit Session Catalog is live! Start building your schedule now: this year's catalog has a record number of breakout and technical sessions, and the hands-on labs including the Solutions Demo Sandbox offer more intensive training than ever. Everything has been tailored to your role and organized by content type, to give you the strongest start possible in the new sales cycle. Let's dive in…

Read More


----

Shared via my feedly reader




Sent from my iPad

Manage the unmanageable! Our chief security strategist tells you how [Webinar] [feedly]



----
Manage the unmanageable! Our chief security strategist tells you how [Webinar]
// Citrix Blogs

Register Now! Managing the Unmanageable: A webinar overview of security with virtualization, containerization and secure networking. *If you are unavailable on October 30, please register anyway; we'll send an on-demand link after the event. Follow @CitrixSecurity. If you're concerned about control over sensitive data on unmanaged devices, especially data that is subject to governance and regulations, please plan to join Kurt Roemer, Citrix's chief security strategist, on…

Read More


----

Shared via my feedly reader




Sent from my iPad

Public Hotfixes: HDX MediaStream Hotfix HDXFlash200WX64008 - For Citrix XenApp 6.5 for Windows Server 2008 R2 - English [feedly]





Sent from my iPad

CloudStack European User Group roundup – October 2014 [feedly]



----
CloudStack European User Group roundup – October 2014
// CloudStack Consultancy & CloudStack...

Our Autumn meetup saw us back at Trend Micro, which is becoming a home from home for us! Great to see the guys there again and many thanks for hosting another great meeting. As usual a good turn-out for the group and some really interesting discussions.

Once we'd all settled, Giles kicked off with the CloudStack news, and since our last meet-up there's been a lot of that…

CloudStack 4.4.0 has been released since our last meet-up (GA July 26 2014), and (in what was some fantastic timing), our User Group was held the day after the release of CloudStack 4.4.1 (GA October 24 2014). Giles went through what's new, and the headlines are: a big uplift of features supported with Hyper-V; support for managed storage for root disks (which gives greater flexibility to the 'Storage as a Service' proposition); and support for VMware Distributed Resource Scheduler (DRS). The full list of new features and functionality is in Giles' slides.

It wasn't only CloudStack this time that had a new version released – next up on Giles' agenda was the release of CloudMonkey 5.2.0, adding new functionality such as multiple server profiles and Windows and Linux versions. We at ShapeBlue are particularly proud of this, as it was authored by our very own Software Engineering team, led by Rohit Yadav.

Whilst on the subject of the ShapeBlue Software Engineering team, they have also been working hard on something we hope will benefit the community – we will now host repos available to anyone to download and use (http://shapeblue.com/packages). Maintained by ShapeBlue, and hosted by our friends at BT, this will have 3 types of repository. 'Upstream' will contain builds from official Apache CloudStack releases; 'Main' will be ShapeBlue patched Apache CloudStack releases built from our patch repository; and finally, 'Testing' will be a nightly build repository (untested, potentially unstable and not available for all versions). We will also include links to useful documentation such as release documentation, release notes, installation docs, upgrade docs and admin docs.

Sticking with CloudStack development, Giles then talked about a push from within the community to completely change the development approach. Instead of developing on the master and cutting a potentially unstable release branch which then goes through QA while development continues on the master, the proposal is to maintain a stable master, with development work merged into it. Watch this space!

Giles then shared some figures and some anecdotal evidence based on his recent experience and travels. One of the problems CloudStack has is awareness, with another platform grabbing all the headlines and maybe shouting the loudest. However, we are seeing the install base grow, the number of committers grow, and lots of people representing lots of enterprises deciding that maybe it's time to look at CloudStack. At the recent LinuxCon Europe there were 5 CloudStack talks and the buzz around the delegates was 'it's time to look at something else'… maybe something that 'just works'. On that – the awareness problem IS being addressed, with an announcement to be made at the end of the next European CloudStack Collaboration Conference (Budapest, November 19-22). Again – watch this space.

Earlier this year something that wasn't big news BECAME big news – namely, the departure from Citrix of 3 big CloudStack names. A lot of blogs and news articles immediately leapt to some pretty wild assumptions, and started claiming that this was the end of CloudStack. This all died away very quickly as a tide of better informed blogs and news articles followed, and Giles summarised here by pointing out that CloudStack is an Apache project and has been for 2.5 years; Citrix are a big contributor but that's now down to c.40% and continues to fall; and that NO organisation has standing in an ASF project. CloudStack does NOT = Citrix.

Before handing over to our first speaker, Giles finished with news of the Conference – 64 different speakers, workshops and training and use-cases. This event is always fantastic and this year will be no different – book your place now! http://events.linuxfoundation.org/events/cloudstack-collaboration-conference-Europe

Our first guest speaker was Laurence Forgiel of BT, talking us through how BT Research have used CloudStack.

Laurence talked us through how they came to use CloudStack – starting with version 2 back in 2011, moving through CloudStack 3.0.5 with CloudPortal 1.4.5 and currently on 4.3 with CloudPortal 2.1. He then talked about how they use their cloud and how he feels that CloudStack 'removes barriers to cloud adoption'. This opened up a lot of lively discussions in the room, and it was really interesting to hear someone talk about using CloudStack from its very early days.
 
Next up was a good friend of ours – Lucian-Paul Burclau (possibly better known as Nux to some of you) – who gave an excellent presentation on creating CloudStack templates using OpenVM.

This was a very informative talk, starting off with the '3 laws' of building templates, namely: a template must be functional, a template must be secure, and a template must be unique. After expanding on this, Lucian led a short debate about the trend for hypervisors to need agents running inside the VM and what should be provided in a template. The future? Better templates, launchers on different CloudStack public clouds, etc. If you're interested, get involved: contact@openvm.eu.

After a short break it was Geoff Higginbottom's turn with the laser pointer. Geoff is our CTO here at ShapeBlue, and gave a great presentation on how to stand up a CloudStack cloud on your laptop!

Geoff went through the software you need to create the virtual environment (all open source naturally), and then really got into the nuts and bolts going into network design, how to configure the software, XenServer, the management servers, CloudStack zones, pods and clusters and Storage. Geoff even went into the specific commands and even touched on enhanced configuration. For those that were listening and taking fast notes, this was a master class in how easy it is to get CloudStack working. Everybody wanted to get hold of these slides.

Last but by no means least we had Andy Roberts of SolidFire.

Andy's talk started by making the point that public cloud has changed how IT is delivered, and the further we move forward, the less sustainable the 'siloed infrastructure model' becomes. To quote Andy – 'We're moving from a world of isolated users, stifled by isolation and manual administration, to one of shared resource pools available in real time using self-service tools'. This is driving a shift in design and delivery, with SolidFire a part of that shift. I'm not going to pitch on their behalf here – but do check out Andy's slides! Andy finished his presentation by talking about all the great work SolidFire are doing with CloudStack integration.

As always, we finished a great day with a few beers and the discussions continued for another few hours…

The next CloudStack European User Group will be our Winter meet-up, in January 2015. Either keep an eye on the mailing lists or right here for more details. Hope to see you there!


----

Shared via my feedly reader




Sent from my iPad

Public CloudStack Packages [feedly]



----
Public CloudStack Packages
// CloudStack Consultancy & CloudStack...

ShapeBlue today announced that we will be publicly hosting our CloudStack repository and SystemVM templates. But why have we decided to do this?

Access to our CloudStack product patches

As part of ShapeBlue's CloudStack Software Engineering services, we provide a product patching service to our customers, where we take an official CloudStack release that our customer is running in production and apply bugfixes or enhancements. We try to do this work publicly and contribute it to the upstream CloudStack project, unless requested by the customer to keep it private. After the whole process of building and testing internally, we package a testing APT/YUM repository that is used to verify the build by our team on a test infrastructure that is close to the customer's environment, before we deliver the patch to the customer.

So, yes, we are now giving non-paying customers access to our CloudStack product patches, along with our commercially supported customers. What we won't do, however, is give any notifications, technical support or assistance on those patches unless an organisation has a CloudStack Infrastructure Support agreement in place.

Our commitment to the CloudStack project

The Apache CloudStack project ships CloudStack releases every 4-6 months. Since being accepted into the Apache Incubator, the project has shipped 11 releases, including the latest 4.4.1 release. After an official Apache CloudStack version gets released, it's currently only a few individuals in the community who package and host CloudStack releases as APT or YUM repositories publicly. That's because the Apache Software Foundation only distributes code. But such package hosting sites may not host previous versions of CloudStack and the SystemVM templates, and oftentimes the information on using those CloudStack repositories is not clear: for example, which git tag or SHA was used to build those repositories, whether any additional patch(es) or modification(s) were applied to the CloudStack build, or whether it's the "oss" build or the "noredist" build.

Since we already have the product patching infrastructure to build, test and package CloudStack, today we're rolling out our public APT/YUM repository and SystemVM template hosting for everyone. We're hosting APT/YUM repositories and SystemVM templates for all CloudStack releases since the 4.2.0 release. All the packages are noredist builds, or what we like to call the full version of CloudStack, which supports the VMware hypervisor, NetApp storage, Juniper SRX, F5, etc. For more information on using the repository, check out the ShapeBlue packages page: http://shapeblue.com/packages.

The packages repository is GPG signed and shipped under Apache License 2.0 by ShapeBlue, and the underlying infrastructure is kindly provided by BT Cloud Compute.


----

Shared via my feedly reader




Sent from my iPad