Big data is driving the use of algorithms in governing mundane but mission-critical tasks. Algorithms seldom operate on their own, and their (dis)utilities depend on the everyday aspects of data capture, processing, and utilization. However, as algorithms become increasingly autonomous and invisible, it becomes harder for the public to detect them and to scrutinize their impartiality. Algorithms can systematically introduce inadvertent bias, reinforce historical discrimination, favor a political orientation, or entrench undesirable practices. Yet it is difficult to hold algorithms accountable, as they continuously evolve with technologies, systems, data, and people, with the ebb and flow of policy priorities, and with the clashes between new and old institutional logics. Greater openness and transparency do not necessarily improve understanding. In this editorial we argue that by unravelling the imperceptibility, materiality, and governmentality of how algorithms work, we can better tackle the inherent challenges in the curatorial practice of data and algorithms. We conclude by presenting fruitful avenues for further research on using algorithms to harness the merits and utilities of a computational form of technocratic governance.