Coroutines Internals
https://www.italiancpp.org/2016/11/02/coroutines-internals/ – Wed, 02 Nov 2016

The article's code is available on GitHub.

What are coroutines and why should I care?

In The Art of Computer Programming, Donald Knuth introduced coroutines as an alternative to the usual caller/callee function idiom: two pieces of code treated as cooperating equals.
Coroutines can be thought of as language-level constructs that generalize subroutines by providing multiple entry/exit points. A normal subroutine has a starting point and one or more exit (return) points; a coroutine provides the ability to enter and exit its control flow at different spots, thereby allowing for greater code expressiveness, preservation of automatic state across calls, and non-preemptive multitasking.

It has to be noted that different programming languages can provide various levels of support for coroutines, e.g.

  • Languages supporting the yield keyword
  • Languages providing full support for async, await, yield

In this article we’ll focus on the former.

Thoughtful use of coroutines can lead to cleaner and more maintainable code in a variety of situations. As a motivating example, consider the following pseudocode:

function start_accepting() {
  socket.async_accept(accept_handler);
}

function accept_handler() {
  start_accepting(); // queue the accept for the next connection
  socket.async_read(read_handler);
}

function read_handler(data) {
  request = parse_data(data);
  switch(request) {
    case SEND_DATA: {
      data_to_send = prepare_data_for(request);
      socket.async_write(data_to_send, write_handler);
    } break;
  };
}

function write_handler() {
  ... // continue execution
}

Asynchronous programming is often the preferred way of performing potentially blocking operations without stalling a thread on blocking calls. In the pseudocode above we're assuming (and omitting, for clarity's sake) that all operations are queued and handled by an event loop manager (a common and powerful idiom in asynchronous application programming, cf. boost::asio).

Coroutines allow modeling the same behavior with more readable code:

coroutine acceptor() {
  while(true) {
    socket.async_accept_connection(yield); // To event manager
    start_coroutine(serve_request);
  }
}

coroutine serve_request() {
  socket.async_read(data, yield);
  request = parse_data(data);
  switch(request) {
    case SEND_DATA: {
      data_to_send = prepare_data_for(request);
      socket.async_write(data_to_send, yield);
      ... // Continue execution
    } break;
  };
}

 

The code in serve_request() uses a sequential-looking paradigm

Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.

(boost.asio-coroutines)

 

Standard support

At the time of writing, coroutines did not make it into the next standard version (C++17), although recent MSVC versions already ship with an /await switch that enables experimental support for coroutines.

Coroutines internals

It is important to understand the role of coroutines in providing collaborative, non-preemptive multitasking: spawning a coroutine does not spawn a new thread of execution; rather, a coroutine explicitly waives execution by yielding to its caller (asymmetric coroutines) or to another coroutine (symmetric coroutines).

Since coroutines have been known for a relatively long time, many different implementation techniques (both at language level and at system level) have been devised (an interesting suggested reading: Duff's-device-based coroutines).

Implementing basic support for asymmetric stackful coroutines can be a rewarding experience in terms of understanding how these program-flow constructs interact with their callers and with the underlying memory. Most of the code presented here is a pruned-down version of the coroutine implementation by Oliver Kowalke (cf. boost::coroutine2) available with the boost libraries.

Abstracting away the execution context

In order to implement a coroutine context switch (in its simplest form from the callee to the caller) we need a mechanism to abstract the execution state of our routines and to save/restore stack, registers, CPU flags, etc.

A context switch between threads on x86_64 can be quite costly in terms of performance since it also involves a trip through kernel syscalls. Coroutine context switches within the same thread are far more lightweight and require no kernel interaction. The building blocks for userland context switches are contained in the boost/context library. These facilities provide a solid abstraction that can supply ready-to-use stacks (either plain malloc'd stack buffers or even split stacks on supported architectures) to store our context data, or complement system-level constructs (e.g. ucontext on Unix systems, Fibers on Windows). It has to be noted that boost::context used to support Windows fibers because of undocumented TEB-swapping issues; after fixes were deployed, that support was dropped starting with the introduction of execution context v2.

In this article we'll go for an fcontext implementation in assembly on an x86_64 Unix system (no system calls involved).

Saving the state

When a coroutine yields, an fcontext switch should occur, i.e. we should save whatever state the routine was in at that point in time and JMP to another context. On a recent Unix system, calling conventions, object and executable file formats and other low-level ABI issues are defined by the System V ABI. On the x86_64 architecture the stack grows downwards and parameters to functions are passed in registers rdi, rsi, rdx, rcx, r8, r9, plus additional stack space if needed. The stack is always 16-byte aligned before a call instruction is issued. Registers rbx, rsp, rbp, r12, r13, r14, and r15 are preserved across function calls, while rax, rdi, rsi, rdx, rcx, r8, r9, r10, r11 are scratch registers:

Return value registers:  rax, rdx
Parameter registers:     rdi, rsi, rdx, rcx, r8, r9 (+ additional stack space)
Scratch registers:       rax, rdi, rsi, rdx, rcx, r8, r9, r10, r11
Preserved registers:     rbx, rsp, rbp, r12, r13, r14, r15

Therefore, following in boost::context's footsteps, a reasonable memory layout is the following:

/****************************************************************************************
 *                                                                                      *
 *  ----------------------------------------------------------------------------------  *
 *  |    0    |    1    |    2    |    3    |    4     |    5    |    6    |    7    |  *
 *  ----------------------------------------------------------------------------------  *
 *  |   0x0   |   0x4   |   0x8   |   0xc   |   0x10   |   0x14  |   0x18  |   0x1c  |  *
 *  ----------------------------------------------------------------------------------  *
 *  |        R12        |         R13       |         R14        |        R15        |  *
 *  ----------------------------------------------------------------------------------  *
 *  ----------------------------------------------------------------------------------  *
 *  |    8    |    9    |   10    |   11    |    12    |    13   |    14   |    15   |  *
 *  ----------------------------------------------------------------------------------  *
 *  |   0x20  |   0x24  |   0x28  |  0x2c   |   0x30   |   0x34  |   0x38  |   0x3c  |  *
 *  ----------------------------------------------------------------------------------  *
 *  |        RBX        |         RBP       |         RIP        |       EXIT        |  *
 *  ----------------------------------------------------------------------------------  *
 *                                                                                      *
 ****************************************************************************************/

The EXIT field is going to be left unused for our purposes, but it will be kept in place anyway.

The first thing we need is to allocate space to store the context data and make sure it has a valid alignment for the architecture we’re dealing with

// Allocate context-stack space
context_stack = (void*)malloc(64_Kb);

std::size_t space = UNIX_CONTEXT_DATA_SIZE + 64;
sp = static_cast<char*>(context_stack) + 64_Kb - space;

sp = std::align(64, UNIX_CONTEXT_DATA_SIZE, sp, space);
assert(sp != nullptr && space >= UNIX_CONTEXT_DATA_SIZE);
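For reference, here is a minimal sketch of the helpers the snippet above assumes; they are not shown in the article's excerpt, so treat the names and exact values as plausible stand-ins derived from the memory layout rather than the repository's definitions:

#include <cstddef>

// Hypothetical stand-ins for the helpers used in the allocation snippet.
constexpr std::size_t operator""_Kb(unsigned long long kb) {
    return static_cast<std::size_t>(kb) * 1024; // kilobytes to bytes
}

// 8 slots of 8 bytes each: R12..R15, RBX, RBP, RIP, EXIT (see the layout above).
constexpr std::size_t UNIX_CONTEXT_DATA_SIZE = 64;

// Offset of the RIP slot inside the context data (0x30 in the layout above).
constexpr std::size_t UNIX_CONTEXT_DATA_RIP_OFFSET = 48;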

boost::context offers both memory-efficient on-demand growing stacks and fixed stack allocations (cf. the boost docs). In this example code we'll go for a fixed stack allocation.
Since we can't deal with registers directly in C++ we'll have to fall back on a pure assembly routine. The GAS backend seems the logical tool of choice for this work. We therefore define an external function, with C linkage, to link against our executable:

extern "C" fcontext_t jump_to_context(fcontext_t context);

What is an fcontext_t? In an x86_64 world it is just the content of a register:

using fcontext_t = void*;

Luckily for us, since we invoke jump_to_context with a CALL instruction, the return address (the caller's RIP) has already been pushed onto the stack, so we get an instruction pointer for free in our assembly code:

.text
.globl jump_to_context
.type jump_to_context,@function
.align 16
jump_to_context:
    pushq  %rbp  /* save RBP */
    pushq  %rbx  /* save RBX */
    pushq  %r15  /* save R15 */
    pushq  %r14  /* save R14 */
    pushq  %r13  /* save R13 */
    pushq  %r12  /* save R12 */

    /* omissis */

    /* restore RSP (pointing to context-data) from RDI */
    movq  %rdi, %rsp

    popq  %r12  /* restore R12 */
    popq  %r13  /* restore R13 */
    popq  %r14  /* restore R14 */
    popq  %r15  /* restore R15 */
    popq  %rbx  /* restore RBX */
    popq  %rbp  /* restore RBP */

    /* continue... */

.size jump_to_context,.-jump_to_context

/* Mark that we don't need executable stack. */
.section .note.GNU-stack,"",%progbits

Using CMake putting everything together becomes quite easy:

project(simple_crts CXX ASM)
cmake_minimum_required(VERSION 2.8.12)
set (CMAKE_CXX_STANDARD 14)

set_source_files_properties(jump_to_context_x86_64_elf_gas.S 
                            PROPERTIES COMPILE_FLAGS "-x assembler-with-cpp")

add_executable(simple_crts simple_crts.cpp jump_to_context_x86_64_elf_gas.S)
target_link_libraries(simple_crts ${CONAN_LIBS} pthread)

Trampolines to coroutines

Something is missing at this point: we need a valid RIP pointer to the coroutine to jump to. We could enter the coroutine and have another function store this information for us, but there’s a better way which avoids cluttering the coroutine code entirely: using a trampoline function.

Just as in boost::context, we define a trampoline function ourselves which, when jumped to, re-jumps to the caller and saves its context as a pre-stage for the coroutine:

void trampoline(fcontext_t ctx) {

  // Immediately bounce back to the caller, handing it the trampoline's own
  // context; when we're resumed, the (updated) caller context is returned.
  yield_ctx = jump_to_context(ctx);

  wannabe_coroutine();
}

What we have to do now is a simplified version of the make_context routine to set the first RIP towards the trampoline’s prologue:

// Do make_context's work (simplified)
// Do *NOT* try this at home (or even worse in the office)
void** addr = reinterpret_cast<void**>(static_cast<char*>(sp) +
                                         UNIX_CONTEXT_DATA_RIP_OFFSET);
*addr = reinterpret_cast<void*>(&trampoline);

// In a more complex case there might be additional initialization and
// frame adjustments going on
coroutine_ctx = jump_to_context(sp);

So right now we have a valid trampoline RIP set in place:

/****************************************************************************************
 *                                                                                      *
 *  ----------------------------------------------------------------------------------  *
 *  |    0    |    1    |    2    |    3    |    4     |    5    |    6    |    7    |  *
 *  ----------------------------------------------------------------------------------  *
 *  |   0x0   |   0x4   |   0x8   |   0xc   |   0x10   |   0x14  |   0x18  |   0x1c  |  *
 *  ----------------------------------------------------------------------------------  *
 *  |        R12        |         R13       |         R14        |        R15        |  *
 *  ----------------------------------------------------------------------------------  *
 *  ----------------------------------------------------------------------------------  *
 *  |    8    |    9    |   10    |   11    |    12    |    13   |    14   |    15   |  *
 *  ----------------------------------------------------------------------------------  *
 *  |   0x20  |   0x24  |   0x28  |  0x2c   |   0x30   |   0x34  |   0x38  |   0x3c  |  *
 *  ----------------------------------------------------------------------------------  *
 *  |        RBX        |         RBP       |         RIP        |       EXIT        |  *
 *  ----------------------------------------------------------------------------------  *
 *                                           ^^^^^^^^^^^^^^^^^^^^                       *
 ****************************************************************************************/

This kickstarts the bouncing to/from the trampoline:

.text
.globl jump_to_context
.type jump_to_context,@function
.align 16
jump_to_context:
    pushq  %rbp
    pushq  %rbx 
    pushq  %r15 
    pushq  %r14 
    pushq  %r13 
    pushq  %r12 

    /* store RSP (pointing to context-data) in RAX */
    movq  %rsp, %rax

    movq  %rdi, %rsp

    popq  %r12 
    popq  %r13 
    popq  %r14 
    popq  %r15 
    popq  %rbx 
    popq  %rbp 

    /* restore return-address (must have been put on the new stack) */
    popq  %r8

    /*
       pass the old context as first parameter (if we're headed
       towards a landing function)
    */
    movq  %rax, %rdi

    /* indirect jump to new context */
    jmp  *%r8

.size jump_to_context,.-jump_to_context
.section .note.GNU-stack,"",%progbits

It is important to note that we're keeping the stack aligned during this entire process (recall that the stack has to be 16-byte aligned before a call instruction is issued).

The process roughly goes on like this:

[Diagram: coroutines_graph1]

It has to be noted that the trampoline function might reserve stack space for its parameters as well. In the code above we allocated 64 Kb of heap space to be used as stack space for context operations, so after the first jump the sp automatic variable is no longer reliable; coroutine_ctx should be used instead.

Resuming fcontext

Resuming the trampoline's fcontext requires another call, a RIP save, and a stack-pointer adjustment to coroutine_ctx. The trampoline's old RIP will be available for free after we've restored the first 48 bytes of the fcontext.

Execution can afterwards continue into the designated coroutine. At this point the coroutine should somehow be encapsulated so that it can use the yield_ctx context pointer: that is the gateway to our caller's context (in an asymmetric view).

Each time we want to yield execution back to the caller, we'll have to jump_to_context to the yield_ctx:

void yield() {
  // Jump to the caller's context; when we're resumed, the caller's updated
  // context is returned and stored for the next yield.
  yield_ctx = jump_to_context(yield_ctx);
}

void wannabe_coroutine() {
  std::cout << "I wanna be a coroutine when I grow my stack space up\n";
  yield();
  std::cout << "Hooray!\n";
  yield();
}

Notice that we’re also reassigning the variable with the return value provided by jump_to_context. This assignment is not executed until the control flow comes back to the yield() function:

.. save

/* store RSP (pointing to context-data) in RAX */
movq  %rsp, %rax

.. restore

This is a cooperative behavior example: each jump_to_context() invocation from this point onward actually returns fcontext data for the previous invocation.

The rest of the code bounces back and forth between the contexts, resulting in the following sequence:

[Diagram: coroutines_graph2]

At the end of the day the stack is freed (sounds weird to say) and the program terminates.
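Putting the pieces together, the driving code might look like this minimal sketch (it reuses the globals introduced in the snippets above — context_stack, sp, coroutine_ctx — so the exact wiring in the article's GitHub code may differ):

int main() {
  // Allocate and align the context stack, then plant trampoline's address
  // into the RIP slot, exactly as shown earlier.

  // First jump: enters trampoline(), which immediately bounces back here,
  // handing us the coroutine's context.
  coroutine_ctx = jump_to_context(sp);

  // Every further jump resumes wannabe_coroutine() until its next yield().
  coroutine_ctx = jump_to_context(coroutine_ctx); // prints the first line
  coroutine_ctx = jump_to_context(coroutine_ctx); // prints "Hooray!"

  free(context_stack);
}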

Exercise: wrapping up

As a didactic exercise (i.e. do not EVER use this code in a production environment), we can use some metaprogramming to wrap our coroutines and avoid polluting our code with stack adjustments and cleanup boilerplate. Eventually we'd like to end up with code like this:

int g_fib = -1;

void fibonacci_gen() {

  int first = 1;
  g_fib = first;
  yield();

  int second = 1;
  g_fib = second;
  yield();

  for (int i = 0; i < 9; ++i) {

    int third = first + second;
    first = second;
    second = third;
    g_fib = third;
    yield();

  }

}

int main() {

  // Outputs the first 10 Fibonacci numbers

  coroutine<void(void)> coro(fibonacci_gen);

  for (int i = 0; i <= 10; ++i) {
    coro.resume();
    if (i) std::cout << g_fib << " ";
  }

}

To do this we create a templated wrapper class:

template <typename Signature> class coroutine;

that will handle the stack and trampoline setup for us. One difference from the first example is the need for a wrapper that will handle the trampoline invocation (shields us from implementation-dependent issues):

template <typename Signature>
void call_member_trampoline(coroutine<Signature> *instance, fcontext_t ctx) {
  instance->trampoline(ctx);
}

The trampoline is therefore modified as follows:

void trampoline(fcontext_t ctx) {

  size_t index = yield_ctx.size() - 1;
  yield_ctx[index] = jump_to_context(this, ctx);

  this->m_fn();
}

The only difference in the jump_to_context() function is in handling its new arity:

/* restore RSP (pointing to context-data) from RSI */
movq  %rsi, %rsp

and the promotion of %rdi from scratch-register to parameter-register (since we’re directly jumping to a destination context’s RIP).

The rest of the code remains largely unchanged.
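To give an idea of the shape of that wrapper, here is a simplified, hypothetical skeleton (member names and the single-coroutine bookkeeping are illustrative; the actual class in the article's repository differs in its details):

#include <functional>
#include <utility>

// New arity: the instance travels in RDI, the context in RSI.
extern "C" fcontext_t jump_to_context(void* instance, fcontext_t ctx);

template <typename Signature>
class coroutine {
public:
  explicit coroutine(std::function<Signature> fn) : m_fn(std::move(fn)) {
    // Allocate and align the context stack, then store the address of
    // call_member_trampoline<Signature> into the RIP slot, as done by hand
    // in the first example; m_ctx ends up pointing to the prepared stack.
  }

  void resume() {
    // Jump into the coroutine (into the trampoline on the first call) and
    // store back the context returned when the coroutine yields.
    m_ctx = jump_to_context(this, m_ctx);
  }

  void trampoline(fcontext_t ctx) {
    m_yield_ctx = jump_to_context(this, ctx); // bounce back to the caller first
    m_fn();                                   // then run the coroutine body
  }

  void yield() {
    // The free-standing yield() used by fibonacci_gen would forward here.
    m_yield_ctx = jump_to_context(this, m_yield_ctx);
  }

private:
  std::function<Signature> m_fn;
  fcontext_t m_ctx = nullptr;       // the coroutine's saved context
  fcontext_t m_yield_ctx = nullptr; // the caller's saved context
  // ... stack storage and cleanup omitted ...
};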

Back to boost::context

If you’ve followed through the entire article and you made it here, you should by now know what the following boost::context2 program does:

#include <boost/context/all.hpp>
#include <iostream>
#include <array>

namespace ctx = boost::context::detail;

class Coroutine {
public:
  Coroutine() {
    my_context = ctx::make_fcontext(
      stack.data() + stack.size(),
      stack.size(),
      &Coroutine::dispatch
    );
  }
  virtual ~Coroutine() {}

  void operator()() {
    auto transfer_ctx = ctx::jump_fcontext(my_context, this);
    my_context = transfer_ctx.fctx;
  }

protected:
  void yield() {
    auto transfer_ctx = ctx::jump_fcontext(yield_context, 0);
    my_context = transfer_ctx.fctx;
  }

  virtual void call() = 0;

private:
  static void dispatch(ctx::transfer_t coroutine_ptr) {
    Coroutine *coroutine = reinterpret_cast<Coroutine *>(coroutine_ptr.data);
    coroutine->yield_context = coroutine_ptr.fctx;
    coroutine->call();
    while(true)
      coroutine->yield();
  }

private:
  ctx::fcontext_t my_context;
  ctx::fcontext_t yield_context;
  std::array<intptr_t, 66 * 1024> stack;
};

struct A : public Coroutine {
  void call() {
    std::cout << " __________________________________ " << std::endl;
    yield();
    std::cout << "|    _       _       |_|    _| |_  |" << std::endl;
    yield();
    std::cout << "|   |_|     |_|      | |     | |_  |" << std::endl;
  }
};

struct B : public Coroutine {
  void call() {
    std::cout << "|                                  |" << std::endl;
    yield();
    std::cout << "|  _| |_   _| |_      _    |_   _| |" << std::endl;
    yield();
    std::cout << "|                    |_|     |___| |" << std::endl;
    yield();
    std::cout << "|                                  |" << std::endl;
    yield();
    std::cout << "|__________________________________|" << std::endl;
  }
};

struct C : public Coroutine {
  void call() {
    std::cout << "|                     _       _    |" << std::endl;
    yield();
    std::cout << "| |_   _| |_   _|    | |     | |   |" << std::endl;
  }

  void operator++(int) {
    std::cout << "| ++It - The Italian C++ Community |" << std::endl;
    std::cout << "|__________________________________|" << std::endl;
  }
};


int main() {

  A a;
  B b;
  C c;
  for (size_t i = 0; i<10; ++i) {
    a();
    b();
    c();
  }

  c++; // An entire operator overloading to write 'c++'? Worth it!
}

Final words

Coroutines provide a powerful abstraction that offers the same level of concurrency one would get with asynchronous callbacks, while at the same time offering a chance to write more maintainable code. At the time of writing, boost offers the context, coroutine and fiber libraries. As we've seen, boost::context provides the foundation for userland context switches; boost::coroutine2 offers coroutine support (coroutines conceptually need no synchronization whatsoever, since they implement non-preemptive cooperative multitasking); and boost::fiber builds on boost::context to add a scheduling mechanism: each time a fiber yields, control is given back to a scheduler, which decides the next execution strategy (cf. N4024).

As usual it is up to the programmer to carefully choose which abstractions are to be used in a specific context.

References and credits

Special thanks to the ++it community and Oliver Kowalke for providing insights and reviews of parts of this article.

Italian C++ Conference 2016
https://www.italiancpp.org/event/conference-2016/ – Sat, 14 May 2016

All the photos

Session videos

Read the post about the event

The first Italian conference entirely dedicated to C++

100+ attendees

Saturday, May 14 at the University of Milano-Bicocca

 

With the special participation of James McNellis,
from the Visual C++ Team in Redmond!

Listen to a short podcast about the event!

 

Programme of the day (links to slides and videos):

When           Duration  What                                                                       Who
8.30 – 9.00    30'       Registration                                                               –
9.00 – 9.15    15'       Welcome and introduction                                                   Marco Arena
9.30 – 10.30   60'       An Introduction to C++ Coroutines                                          James McNellis
10.40 – 11.40  60'       Da un grande C++ derivano grandi responsabilità                            Marco Arena
11.40 – 12.00  20'       Coffee break                                                               –
12.00 – 13.00  60'       REST e Websocket in C++ diventano semplici e produttivi con il REST SDK    Raffaele Rialdi
13.00 – 13.30  30'       Lunch                                                                      –
13.30 – 14.00  30'       Questione di Codice: Come possiamo rendere più semplice la scrittura, il test e il deploy di codice C++ complesso?   RogueWave Software
14.15 – 15.15  60'       No More Pointers! New ideas for teaching and learning modern C++           Marco Foco
15.30 – 16.30  60'       Adventures in a Legacy Codebase                                            James McNellis
16.30 – 16.50  20'       Break                                                                      –
16.50 – 17.30  40'       Ask Us Everything                                                          All the speakers
17.45 – 18.00  15'       Closing remarks                                                            Marco Arena

Sponsored by: Rogue Wave Software

 

 
Supported by:

O'Reilly

JetBrains
Codemotion Milano 2015
https://www.italiancpp.org/event/codemotion-milano-2015/ – Fri, 20 Nov 2015

Marco Arena presented:

“Perché nel 2015 parliamo ancora di C++?”

Feedback on the session
Meetup in Florence
https://www.italiancpp.org/event/meetup-firenze-2015/ – Sat, 20 Jun 2015

 
All the photos

Read the post about the event!

Session videos

#cppFI on Storify

Saturday, June 20
“la porti un compilatore a Firenze!”

50+ attendees.


Why attend:

7 technical sessions

Networking

Surprises and exclusive promotions

Exceptional speaker: Bartosz Milewski!

Programme of the day (links to slides, demos and videos):

 

When           Duration  What                                                                 Who
8.30 – 9.00    30'       Registration                                                         –
9.00 – 9.20    20'       Welcome and introduction                                             Marco Arena & Develer
9.20 – 10.05   45'       C++ and Why you Care                                                 Gian Lorenzo Meocci
10.15 – 10.45  30'       Teaching C++14 on Raspberry PI 2                                     Marco Foco
10.45 – 11.10  25'       Break                                                                –
11.10 – 12.10  60'       Qt e C++: binomio perfetto                                           Luca Ottaviano
12.15 – 13.00  45'       Cat's Anatomy: a C++14 Functional Library                            Nicola Bonelli
13.00 – 14.20  80'       Lunch                                                                –
14.20 – 15.20  60'       Introduzione al game-development in C++11/C++14                      Vittorio Romeo
15.30 – 16.30  60'       Solving Constraint Satisfaction Problem using monads                 Bartosz Milewski
16.30 – 17.00  30'       Break                                                                –
17.00 – 18.00  60'       Utilize your CPU power – Cache optimizations and SIMD instructions   Mario Mulansky
18.00 – 18.15  15'       Closing remarks                                                      Marco Arena & Develer

Lunch together at the Outside Bistrot

Watch the session videos

Surprises and exclusive promotions during the event:

JetBrains – raffle of free licenses for CLion and ReSharper C++

O'Reilly – raffle of 4 O'Reilly books and discount codes for up to 50% off

CppDepend – all meetup attendees are entitled to a free CppDepend license!

We organized this meetup together with Develer.
Cat: a C++14 functional library
https://www.italiancpp.org/2015/04/29/cat-a-c14-functional-library/ – Wed, 29 Apr 2015

The rise of functional programming has affected many programming languages, and C++ could not escape it. The need for paradigms like partial application (via currying) and functional composition is now a reality in C++ as well, and the spread of libraries like FIT and FTL is evidence of that.

Cat is a C++14 library inspired by Haskell. Cat aims at pushing the functional programming approach in C++ to another level.

The added value of Cat is twofold. On the one hand, it fills a gap in the language with respect to functional programming. For this purpose, some utility functions and classes are provided (callable wrappers with partial application, sections, utilities for tuples, extended type traits, alternative forwarding functions, etc.).
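To give an idea of the style (a generic C++14 sketch written for this summary, not Cat's actual API), partial application via currying can be expressed with generic lambdas:

#include <iostream>

// Generic currying sketch: curry(f) turns a binary callable into a chain of
// unary callables, enabling partial application.
auto curry = [](auto f) {
    return [f](auto a) {
        return [f, a](auto b) {
            return f(a, b);
        };
    };
};

int main() {
    auto add  = [](int a, int b) { return a + b; };
    auto add5 = curry(add)(5);          // partially applied
    std::cout << add5(37) << std::endl; // prints 42
}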

On the other hand, Cat promotes the use of generic programming with type classes, inspired by Category Theory. The library includes a framework for building type classes, along with a dozen of them (Functor, Applicative, Monoid, Monad, Read, Show, to mention just a few) and the related instances adapted to the C++ context.

Cat is distributed under the MIT license and it’s available for download at the address https://cat.github.io.
Meetup in Pordenone
https://www.italiancpp.org/event/meetup-pordenone-2015/ – Sat, 07 Feb 2015

All the photos

Read the post about the meetup

On Saturday, February 7 we met in Pordenone!

80+ attendees.

 

Organized with: Innova

Hosted by: Pordenone

Programme of the day (links to details and slides):

Main track:

Time           What                                               Speaker
8.00 – 9.00    Registration                                       –
9.00 – 9.15    Welcome and introduction                           Marco Parenzan
9.15 – 10.45   Keynote: Perché nel 2015 parliamo ancora di C++?   Marco Arena
10.45 – 11.10  Break                                              –
11.10 – 12.10  C++11 & C++14 Overview                             Gian Lorenzo Meocci
12.15 – 13.15  C++ from '90 to '14                                Gianluca Padovani & Marco Foco
13.15 – 14.10  Lunch                                              –

 

Afternoon track 1:

Time           What                                                  Speaker
14.10 – 15.10  Introduzione al framework Qt                          Luca Ottaviano
15.20 – 16.20  L'accesso ai dati nell'epoca moderna. Sql++11 e ODB   Nicola Gigante
16.20 – 16.40  Break                                                 –
16.40 – 17.40  L'Unreal Engine 4                                     Matteo Bertello

Afternoon track 2:

Time           What                              Speaker
14.10 – 15.10  Chromium as a Framework           Raffaele Intorcia & Tiziano Cappellari
15.20 – 16.20  C++ in Windows Phone Apps         Mirco Vanini
16.20 – 16.40  Break                             –
16.40 – 17.40  C++ nello sviluppo iOS/Android    Giuseppe Merlino & Lucio Cosmo

 

Main track:

Time           What               Speaker
17.45 – 18.00  Closing remarks    Marco Arena & Marco Parenzan

Companies that supported the event:

Servizi CGN
Meetup in Bologna
https://www.italiancpp.org/event/meetup-bologna-2014/ – Sat, 08 Nov 2014

All the photos

Read the post about the meetup

On Saturday, November 8 in Bologna we talked C++!

50+ attendees.

Sala Consiliare “Quartiere Porto”, less than 2 km from the Central Station!

Programme of the day (links to details and slides):

 

Time           What                                                            Speaker
9.00 – 9.30    Registration                                                    –
9.30 – 10.00   Welcome and introduction                                        Marco Arena
10.15 – 11.15  Convincetemi ad usare il C++14                                  Roberto Bettazzoni
11.15 – 11.30  Break                                                           –
11.30 – 13.00  Seeing monads in C++                                            Bartosz Milewski
13.00 – 14.00  Lunch together                                                  –
14.00 – 14.30  Meet the Rule of Zero                                           Marco Arena
14.40 – 15.40  Going native with less coupling: Dependency Injection in C++    Daniele Pallastrelli
15.40 – 15.50  Break                                                           –
15.50 – 16.20  Sviluppo di un framework di unit-test C++                       Gianfranco Zuliani
16.30 – 17.15  Ask Us Everything                                               Everyone
17.15 – 17.30  Closing remarks                                                 Marco Arena
Community Days 2014 – Rome
https://www.italiancpp.org/event/community-days-2014-roma/ – Tue, 23 Sep 2014

All the photos

We curated an entire C++ track for 20+ people!

Special guest: Bartosz Milewski!

  • Il nuovo C++? Torniamo alle basi – Marco Arena
  • C++, un linguaggio evoluto per software moderno – Nicola Bonelli & Paolo Severini
  • Functional techniques in C++ – Bartosz Milewski
  • App Windows Phone in C++ – Mirco Vanini

Watch the session videos!
Una sbirciatina al C++14 (A peek at C++14)
https://www.italiancpp.org/2014/02/03/una-sbirciatina-al-cpp14/ – Mon, 03 Feb 2014

C++14 is the informal name of the next revision of the ISO/IEC C++ standard, which might be made official this year. The draft approved by the ISO committee – N3797 – was published on May 15, 2013.

In this short article we'll look at some of the most interesting features already available in Clang (each topic links to the related paper/draft). We give you the chance to try some of the examples directly in the article. Any author can use these "compilable snippets" in their own articles, so if you feel like writing an article of this kind, step right up!

Generic lambdas & initialized capture

When writing a lambda, how many times have you asked yourself: "why doesn't the compiler deduce the parameter types automatically?!" For instance, in a for_each:

vector<int> v;
for_each(begin(v), end(v), [](int i) {
   cout << i << endl;
});

First of all the compiler knows which type goes there, but that's not all: the same lambda (apart from the parameter) could be reused elsewhere to print any object that supports operator<<(ostream&):

auto printer = [](string s) {
   cout << s << endl;
};

In C++14 it is possible to create generic (also called polymorphic) lambdas, via auto:

#include <iostream>

using namespace std;

int main()
{
   auto printer = [](auto value) {
      cout << "PRINTING: " << value << endl;
   };

   printer(10);
   printer("hello");
}

A stateless lambda (with an "empty" capture), just as in C++11, can be cast to an appropriate function pointer:

auto printer = [](auto value) {
    cout << value << endl;
};

void(*printIntFn)(int) = printer;

Another limitation of lambdas concerns captures, which are allowed only by copy and by reference, effectively ruling out a "by-move" capture. Some workarounds can be adopted – such as bind, or hiding a move behind a copy – but these are just tricks to dodge a limitation of the language.

In C++14 the capture-list syntax allows genuine variable initializations (this new feature is indeed called initialized lambda capture):

#include <iostream>
#include <memory>
#include <vector>
#include <numeric>
using namespace std;

int main()
{
    unique_ptr<int> ptr {new int{10}};

    auto closure = [ptr = move(ptr)]{
        cout << "From closure: " << *ptr << endl;
    };

    closure();

    if (ptr) // is ptr valid?
        cout << "ops...move didn't work..." << endl;

    vector<int> v{1,2,3,4,5};

    auto printSum = [sum = accumulate(begin(v), end(v), 0)]{
        cout << "Sum is: " << sum << endl;
    };

    printSum();
}

We suggest not to abuse this notation… there's no need to write all your code inside [ ] 🙂

A remarkable aspect of this syntax is the ability to create closures with "full ownership": in the previous example, had we introduced a shared_ptr, the lambda would somehow have shared ownership of the pointer with the scope in which it is defined. Conversely, by moving a unique_ptr into the lambda, ownership of the pointer is completely transferred into the closure. There is no shortage of cases where this new syntax will come in really handy, especially in multi-threaded code.
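As an illustration (a minimal sketch, not part of the original article), a move-captured closure can hand exclusive ownership of a resource over to another thread:

#include <iostream>
#include <memory>
#include <thread>
#include <utility>
using namespace std;

int main()
{
    auto resource = make_unique<int>(42);

    // The closure now owns the resource exclusively; the enclosing scope
    // can no longer touch it.
    thread worker([res = move(resource)] {
        cout << "Worker owns: " << *res << endl;
    });

    worker.join();
}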

Return type deduction for normal functions

In C++11 the return type of a lambda is deduced automatically if the lambda consists of a single expression. In C++14 return type deduction for lambdas is extended to more complicated cases as well.

But that's not all: automatic deduction is also enabled for ordinary functions, through two different notations:

// auto-semantics
auto func() {...}

// decltype-semantics
decltype(auto) func() {...}

In the first case the return type is deduced following the semantics of auto (e.g. reference qualifiers are dropped), in the second case following the semantics of decltype. Let's look at an example with auto:

#include <iostream>
#include <type_traits>
#include <vector>
using namespace std;

template<typename T>
auto sum(T&& a, T&& b)
{
    return a+b;
}

int main()
{
    cout << "Summing stuff:" << endl;
    cout << sum(1, 2) << endl;
    cout << sum(1.56, 3.66) << endl;
    cout << sum(string{"hel"}, string{"lo"}) << endl;
}

A scenario in which this notation would be inappropriate (the example does not compile, in fact):

#include <memory>
using namespace std;

struct Base {};
struct Derived : Base {};

shared_ptr<Base> share_ok(int i) // ok
{
    if (i)
        return make_shared<Base>();
    return make_shared<Derived>();
}

auto share_ops(int i) // ops
{
    if (i)
        return make_shared<Base>();
    return make_shared<Derived>();
}

int main()
{
    auto shared1 = share_ok(1);
    auto shared2 = share_ops(1); 
}

In any case, before giving guidelines or stylistic suggestions, let's wait until we've used this feature in production.

And here are auto and decltype(auto) side by side:

#include <iostream>
#include <type_traits>
#include <vector>
using namespace std;

vector<int> v{1,2,3,4};

decltype(auto) getDecltype(size_t index)
{
    return v[index];
}

auto getAuto(size_t index)
{
    return v[index];
}

int main()
{
    auto val = getAuto(0); 
    auto anotherVal = getDecltype(1);
    auto& ref = getDecltype(0);
    ref += 10; // aka: v[0] += 10;
    cout << "copied v[0] = " << val << endl;
    cout << "copied v[1] = " << anotherVal << endl;

    cout << "final vector: [ ";
    for (auto i : v)
        cout << i << " "; 
    cout << "]" << endl;
}

This simple example shows that, even though a vector's operator[] returns a reference, auto's deduction semantics require the ref-qualifiers to be dropped. That's why getAuto() returns an int (by copy). Conversely, with decltype(auto) the deduction rules of decltype are used, which preserve the qualifiers (and therefore the fact that operator[] returns a reference). For a more thorough explanation of auto and decltype we recommend this article by Thomas Becker. Try playing with the example directly in the article. If you are used to writing generic code, you will find these two novelties very useful!

Variable templates

In C++11 there is no way to parameterize a constant directly with templates, as is instead possible for classes and functions. Workarounds are generally adopted, such as classes inside which a series of static constexpr members are defined (e.g. see numeric_limits), as sketched below.
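For comparison, the typical C++11 workaround might look like this (an illustrative sketch, not taken from the original article):

#include <iostream>

// C++11-style workaround: wrap the constant in a class template exposing a
// static constexpr member (the numeric_limits approach).
template<typename T>
struct Pi {
    static constexpr T value = T(3.1415926535897932385);
};

template<typename T>
constexpr T Pi<T>::value; // out-of-class definition (needed if odr-used)

int main()
{
    std::cout << Pi<double>::value << std::endl;
}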

Starting with C++14 it is allowed to define constexpr variable templates (or simply variable templates), for example:

#include <iostream>
#include <iomanip>
using namespace std;

template<typename T>
constexpr T pi = T(3.1415926535897932385);

template<typename T>
T areaOfCircle(T r) 
{
    return pi<T> * r * r;
}

int main()
{
    cout << setprecision(10);
    cout << "PI double = " << pi<double> << endl;
    cout << "PI float = " << pi<float> << endl;
    cout << "Area double = " << areaOfCircle(1.5) << endl;
    cout << "Area float = " << areaOfCircle(1.5f) << endl;
}

Our quick tour of C++14 is complete. Don't hesitate to leave a comment with your impressions. And if a comment isn't enough for you, write a whole article! Contact us and we'll help you publish your "piece" on ++it – the mini-compilers are available to everyone!
