five times a

analyzing architecture at any angle

Category: algorithm

/5xa/ Graphics made with six axis industrial robot

Recently we took part in the exhibition called six.axis.made, courtesy of Koło Imago.

We have also created our website: http://aaaaa.io

source

link https://www.facebook.com/fivetimesa/photos/?tab=album&album_id=530062227189738


/txt/ The Feynman-Tufte Principle

A visual display of data should be simple enough to fit on the side of a van
By Michael Shermer on April 1, 2005

Credit: BRAD HINES

I had long wanted to meet Edward R. Tufte–the man the New York Times called “the da Vinci of data” because of his concisely written and artfully illustrated books on the visual display of data–and invite him to speak at the Skeptics Society science lecture series that I host at the California Institute of Technology. Tufte is one of the world’s leading experts on a core tool of skepticism: how to see through information obfuscation.
But how could we afford someone of his stature? “My honorarium,” he told me, “is to see Feynman’s van.”

Richard Feynman, the late Caltech physicist, is famous for working on the atomic bomb, winning a Nobel Prize in Physics, cracking safes, playing drums and driving a 1975 Dodge Maxivan adorned with squiggly lines on the side panels. Most people who saw it gazed in puzzlement, but once in a while someone would ask the driver why he had Feynman diagrams all over his van, only to be told, “Because I’m Richard Feynman!”

Feynman diagrams are simplified visual representations of the very complex world of quantum electrodynamics (QED), in which particles of light called photons are depicted by wavy lines, negatively charged electrons are depicted by straight or curved nonwavy lines, and line junctions show electrons emitting or absorbing a photon. In the diagram on the back door of the van, seen in the photograph above with Tufte, time flows from bottom to top. The pair of electrons (the straight lines) are moving toward each other. When the left-hand electron emits a photon (wavy-line junction), that negatively charged particle is deflected outward left; the right-hand electron reabsorbs the photon, causing it to deflect outward right.

Feynman diagrams are the embodiment of what Tufte teaches about analytical design: “Good displays of data help to reveal knowledge relevant to understanding mechanism, process and dynamics, cause and effect.” We see the unthinkable and think the unseeable. “Visual representations of evidence should be governed by principles of reasoning about quantitative evidence. Clear and precise seeing becomes as one with clear and precise thinking.”
The master of clear and precise thinking meets the master of clear and precise seeing in what I call the Feynman-Tufte Principle: a visual display of data should be simple enough to fit on the side of a van.

As Tufte poignantly demonstrated in his analysis of the space shuttle Challenger disaster, the 13 charts prepared for NASA by Thiokol (the makers of the solid-rocket booster that blew up) failed to communicate the link between cool temperatures and O-ring damage on earlier flights. The loss of the Columbia, Tufte believes, was directly related to “a PowerPoint festival of bureaucratic hyperrationalism” in which a single slide contained six different levels of hierarchy (chapters and subheads), thereby obfuscating the conclusion that damage to the left wing might have been significant. In his classic 1964 work The Feynman Lectures on Physics, Feynman covered all of physics–from celestial mechanics to quantum electrodynamics–with only two levels of hierarchy.
Tufte codified the design process into six principles: “(1) documenting the sources and characteristics of the data, (2) insistently enforcing appropriate comparisons, (3) demonstrating mechanisms of cause and effect, (4) expressing those mechanisms quantitatively, (5) recognizing the inherently multivariate nature of analytic problems, (6) inspecting and evaluating alternative explanations.” In brief, “information displays should be documentary, comparative, causal and explanatory, quantified, multivariate, exploratory, skeptical.”

Skeptical. How fitting for this column, opus 50 for me, because when I asked Tufte to summarize the goal of his work, he said, “Simple design, intense content.” Because we all need a mark at which to aim (one meaning of “skeptic”), “simple design, intense content” is a sound objective for this series.

 

source

link http://www.scientificamerican.com/article/the-feynman-tufte-princip/

/txt/ This MIT website will tell you how memorable your photos are using artificial intelligence

You know a memorable photo when you see one, but now so does a new artificial intelligence (AI) system called LaMem.

MIT’s website lets you upload your photos to try out the algorithm, which we first saw over at Discover Magazine.

To create LaMem, the researchers showed a random set of 60,000 images to users of Amazon’s Mechanical Turk site.

Read the rest of this entry »

/txt/ How to Have a Bad Career in Research/Academia, Pre-PhD and Post-PhD (& How to Give a Bad Talk)

David Patterson, UC Berkeley, November 18, 2015

Acknowledgments & Related Work
  • Many of these ideas came from (inspired by?) Tom Anderson, David Culler, Al Davis, Ken Goldberg, John Hennessy, Steve Johnson, John Ousterhout, Randy Katz, Bob Sproull, Carlo Séquin, Bill Tetzlaff, …
  • Studs Terkel, Working: People Talk About What They Do All Day and How They Feel About What They Do (1974), The New Press.
  • “How to Give a Bad Talk” (1983), http://www.cs.berkeley.edu/~pattrsn/talks/BadTalk.pdf
  • “How to Have a Bad Career” (1994), keynote address, Operating Systems Design and Implementation Conf.
  • Richard Hamming, “You and Your Research” (1995), http://www.youtube.com/watch?v=a1zDuOPkMSw
  • Ivan Sutherland, “Technology and Courage” (1996).
  • “How the RAD Lab space came to be” (2007), https://radlab.cs.berkeley.edu/wiki/space/history
  • “Your Students are Your Legacy” (2009), Communications of the ACM 52.3: 30-33.
  • “How to Build a Bad Research Center” (2014), Communications of the ACM 57.3: 33-36.


Outline
  • Part I: How to Have a Bad Grad Student Career, and How to Avoid One
  • Q&A
  • Part II: How to Have a Bad Research Career
  • Part III: How to Avoid a Bad Research Career, plus Richard Hamming (Turing Award for error-detecting and error-correcting codes) video clips from “You and Your Research” (1995)
  • Q&A
  • My Story: Accidental Academic (3 min)
  • What Works for Me (3 min)


Part I: Commandments on How to Have a Bad Graduate Career

I. Concentrate on getting good grades
  – Postpone research involvement: might lower GPA
  – Aim for PhD class valedictorian!

Alternative: Maintain reasonable grades
  – No employer cares about GPA
    » Sorry, no valedictorian
  – Only once have I given below a B in a grad course
  – The 3 prelim courses are the only real grades that count
  – What matters: letters of recommendation
    » From 3-4 faculty & external PhDs who have known you for 5+ years


Read the rest of this entry »

/vid/ Even with a very basic mathematical principle, you still think: what kind of sorcery is this!? (gasp) – 簡君展

source

link https://www.facebook.com/john82827/posts/1028107863877438?pnref=story

post https://www.facebook.com/groups/ALGOinDSGN/permalink/1243992528951569/

/code/txt/ Apollonian Gaskets in Python

Apollonian gaskets are fractals that can be generated from three mutually tangent circles. From these, further circles that fill the enclosing circle can be calculated recursively.

I implemented this in Python as a command-line program that saves the resulting gaskets as SVG images. Below I will explain the math behind it and show some images.

1   How it works

The process of generating an Apollonian gasket roughly works like this:

We start with a triple of circles, each of which touches the other two from the outside. Now we try to find a fourth circle that is also tangent to each of the three. It’s easy to see that there are two possibilities for this: Either it lies in the middle of the three given circles (externally tangent) or it encloses them (internally tangent).

Figure 2: Three and four mutually tangent circles

Left: Three circles with different radii. Each is tangent to the other two.

Right: Two possibilities for a fourth circle that is tangent to the first three. Externally tangent (pink) or internally tangent (green).

Read the rest of this entry »
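The derivation continues behind the cut, but the standard tool for computing that fourth circle is Descartes’ Circle Theorem, which relates the curvatures (inverse radii) of four mutually tangent circles: k4 = k1 + k2 + k3 ± 2√(k1k2 + k2k3 + k3k1). Here is a minimal Python sketch of just this step (my own illustration, not the post’s actual program):

```python
import math

def fourth_curvatures(k1, k2, k3):
    """Descartes' Circle Theorem: given the curvatures (1/radius) of three
    mutually tangent circles, return the two possible curvatures of a
    fourth circle tangent to all three."""
    s = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    # '+' root: the small circle nestled between the three (externally tangent).
    # '-' root: typically negative, meaning a circle that encloses all three
    # (internally tangent); negative curvature encodes "encloses".
    return k1 + k2 + k3 + s, k1 + k2 + k3 - s

# Example: three unit circles touching pairwise.
inner, outer = fourth_curvatures(1.0, 1.0, 1.0)
print(1 / inner)   # ~0.1547: radius of the small middle circle
print(1 / outer)   # ~-2.1547: the enclosing circle, with radius ~2.1547
```

Recursing on each new triple of tangent circles (a complex-number version of the same theorem yields the centres) is what fills the enclosing circle with the gasket.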

/vid/ How to Grow a Mind: Statistics, Structure and Abstraction

http://videolectures.net/nips2010_tenenbaum_hgm/

source

link http://videolectures.net/nips2010_tenenbaum_hgm/

post https://www.facebook.com/groups/PHILinDSGN/permalink/506478339518250/

/txt/ Edit Propagation using Geometric Relationship Functions

We propose a method for propagating edit operations in 2D vector graphics, based on geometric relationship functions. These functions quantify the geometric relationship of a point to a polygon, such as the distance to the boundary or the direction to the closest corner vertex. The level sets of the relationship functions describe points with the same relationship to a polygon. For a given query point we first determine a set of relationships to local features, construct all level sets for these relationships and accumulate them. The maxima of the resulting distribution are points with similar geometric relationships. We show extensions to handle mirror symmetries, and discuss the use of relationship functions as local coordinate systems. Our method can be applied for example to interactive floor-plan editing, and is especially useful for large layouts, where individual edits would be cumbersome. We demonstrate populating 2D layouts with tens to hundreds of objects by propagating relatively few edit operations.
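As a concrete illustration of the level-set idea, here is a toy Python sketch (my own, not the authors’ implementation) of one relationship function, distance to a polygon’s boundary, with its level set approximated on a sampling grid:

```python
import numpy as np

def distance_to_segment(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def distance_to_polygon(p, poly):
    """Distance from p to the closest edge of polygon poly ((N, 2) vertices)."""
    n = len(poly)
    return min(distance_to_segment(p, poly[i], poly[(i + 1) % n]) for i in range(n))

poly = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], dtype=float)  # a rectangle
query = np.array([1.0, 0.5])
d = distance_to_polygon(query, poly)   # the query point's relationship value

# Grid samples with (approximately) the same relationship value form one level
# set; accumulating several such level sets and taking the maxima of the sum
# gives the candidate positions for the propagated edit.
xs, ys = np.meshgrid(np.linspace(-1, 5, 120), np.linspace(-1, 4, 100))
samples = np.stack([xs.ravel(), ys.ravel()], axis=1)
dists = np.array([distance_to_polygon(p, poly) for p in samples])
level_set = samples[np.abs(dists - d) < 0.05]
```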

source

link https://www.cg.tuwien.ac.at/research/publications/2014/Guerrero-2014-GRF/

post https://www.facebook.com/groups/ALGOinDSGN/permalink/1212547125429443/

/vid/ Agent-Based Computational Design

source

link https://www.youtube.com/watch?v=isRoFIegXfk

post https://www.facebook.com/groups/ALGOinDSGN/permalink/1163213750362781/

/txt/ Yes, androids do dream of electric sheep

Google sets up feedback loop in its image recognition neural network – which looks for patterns in pictures – creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying

A hallucinatory filter over a red tree. Spot the animals. Photograph: Google

The pictures, which veer from beautiful to terrifying, were created by the company’s image recognition neural network, which has been “taught” to identify features such as buildings, animals and objects in photographs.

They were created by feeding a picture into the network, asking it to recognise a feature, and modifying the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition.
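In machine-learning terms, each pass of that loop is gradient ascent on a layer’s activations with respect to the input pixels. A minimal sketch of one step (my illustration, not Google’s code; it assumes a PyTorch module `layer` truncated to return the activations of whichever layer we want the image to emphasise):

```python
import torch

def dream_step(image: torch.Tensor, layer: torch.nn.Module, lr: float = 0.02) -> torch.Tensor:
    image = image.clone().requires_grad_(True)
    activations = layer(image)       # how strongly does this layer respond?
    activations.norm().backward()    # gradient of that response w.r.t. the pixels
    with torch.no_grad():
        # Gradient ascent: nudge the pixels so the layer responds more strongly,
        # emphasising whatever features it already (faintly) detects.
        image = image + lr * image.grad / (image.grad.abs().mean() + 1e-8)
    return image.detach()

# Feeding the result back in, over and over, is the feedback loop; enough
# iterations modify the picture beyond all recognition:
#   for _ in range(200):
#       image = dream_step(image, layer)
```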

At a low level, the neural network might be tasked merely to detect the edges in an image. In that case, the picture becomes painterly, an effect that will be instantly familiar to anyone who has experience playing about with Photoshop filters:


An ibex grazing, pre- and post-edge detection. Photograph: Google 

But if the neural network is tasked with finding a more complex feature – such as animals – in an image, it ends up generating a much more disturbing hallucination:

A Knight, pre- and post-animal detection. Photograph: Google 

Ultimately, the software can even run on an image which is nothing more than random noise, generating features that are entirely of its own imagination.


Before: noise; after: banana. Photograph: Google 

Here’s what happens if you task a network focused on finding building features with finding and enhancing them in a featureless image:


 A dreamscape made from random noise. Illustration: Google

The pictures are stunning, but they’re more than just for show. Neural networks are a common feature of machine learning: rather than explicitly programme a computer so that it knows how to recognise an image, the company feeds it images and lets it piece together the key features itself.

But that can result in software that is rather opaque. It’s difficult to know what features the software is examining, and which it has overlooked. For instance, asking the network to discover dumbbells in a picture of random noise reveals it thinks that a dumbbell has to have a muscular arm gripping it:

 Dumbbells (plus arm). Photograph: Google

The solution might be to feed it more images of dumbbells sitting on the ground, until it understands that the arm isn’t an intrinsic part of the dumbbell.

“One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, such as a door or a leaf. The final few layers assemble those into complete interpretations – these neurons activate in response to very complex things such as entire buildings or trees,” explain the Google engineers on the company’s research blog.


Another dreamscape. Photograph: Google

“One way to visualise what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation,” they add. “Say you want to know what sort of image would result in ‘banana’. Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana.”

The image recognition software has already made it into consumer products. Google’s new photo service, Google Photos, features the option to search images with text: entering “dog”, for instance, will pull out every image Google can find which has a dog in it (and occasionally images with other quadrupedal mammals, as well).

So there you have it: Androids don’t just dream of electric sheep; they also dream of mesmerising, multicoloured landscapes.

source

link http://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep

post https://www.facebook.com/groups/ALGOinDSGN/permalink/1157595817591241/