etendue's comments | Hacker News

The report makes reference to "Assessment of Safety Standards for Automotive Electronic Control Systems" by NHTSA, which itself reviews ISO 26262, MIL-STD-882E, DO-178C, the FMVSS, AUTOSAR, and MISRA C.

In this context, they mean verification and validation in the systems engineering sense. Software would be included in that it is a part of the whole system.


I have a hard time understanding the current AV SW stack.

On one hand, at the low level (sensors, motor control, etc.) you likely have traditional hard real-time/MISRA C code, but on the higher layers you probably have things like DNNs and image recognition, which are much less deterministic.

So I am not sure how you reconcile these two worlds, and prove the system is safe and always works in a timely manner.

It seems the only sound approach would be to validate the whole system on a real road.


A few comments on this:

First, as etendue says, it is not easy. The problem of mixing “Boolean” verification with probabilistic, less-deterministic verification is especially hard. I discussed this a bit in [1], if you care to take a look.

Also, I think most current AVs are not driven by DNNs at the top level (comma.ai [2] is one exception). See [3] for some discussion of that, and of verifying machine-learning-based systems.

Finally, one possible way to check that AV manufacturers “do the right thing” in correctly verifying the combination of DNNs, MISRA C, digital HW, sensors and so on is perhaps to create a big, extensible catalog of AV-related scenarios, which ideally should be shared between the manufacturers and the certifying bodies – see [4]. I think there is some hint of that in the DOT PDF – still working my way through it.

[1] https://blog.foretellix.com/2016/07/22/checking-probabilisti...

[2] http://www.bloomberg.com/features/2015-george-hotz-self-driv...

[3] https://blog.foretellix.com/2016/09/14/using-machine-learnin...

[4] https://blog.foretellix.com/2016/07/05/the-tesla-crash-tsuna...


Thanks for your input, really interesting topics on your blog as well.


Thanks. I did a second pass through the policy paper, and put a summary of the verification implications here: https://blog.foretellix.com/2016/09/21/verification-implicat...


I think the simple answer is that it is not easy. To start, rigorous design processes with risk analysis upfront are certainly necessary, as are well-defined operational contexts for the autonomous functionality, and a very disciplined approach to clearly defining safety-critical subsystems and minimizing their surface area.

There's a surprising amount of work in the literature that serves as a guide for using neural networks in safety-critical contexts, e.g., http://dl.acm.org/citation.cfm?id=2156661 and http://dl.acm.org/citation.cfm?id=582141.


Now you understand the job of systems engineering :)

Verify the components, validate the entire system: that's the typical approach.


That sounds pretty much just like web application development, or any other front-end user-facing development. You can verify internal components through testing, but once you introduce non-deterministic random variables like the browser software your users are using, and the users themselves, all you can do is validate the entire system through real-world testing and hope you've covered the edge cases you need to handle and will fail gracefully for the ones you missed.


The point I was trying to make is that if you have actuators running MISRA C that are going to be driven by something written in TensorFlow, does it still make sense to have a requirement to use MISRA C in the first place for the low-level part?


I'd be very wary of using complex SOUP (software of unknown provenance) like TensorFlow, even if brought under my quality system. I think a good answer here is that once one goes under design control, the subset of functionality needed should be implemented in-house under the organization's SDLC.


Of course these things are meant to be used (1) to train the system, (2) as a player in the prototype. Exactly like in the old school ML-based systems: you train in Matlab or CudaConvNet, and then you load the trained classifier into the custom-made player highly tuned to your hardware and problem domain.


Most certainly - safety should be guaranteed at the lowest level, even if TensorFlow gets borked.

Think of it as a failure cascade - if TensorFlow breaks, the car can safely stop. If the low-level stuff breaks, the car may not be able to stop (or go).
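
To make that concrete, here is a minimal sketch of what the lowest-level supervisor in such a cascade could look like, written in a rough MISRA-C style (fixed-width types, no dynamic allocation). Every function name and the timeout value are hypothetical placeholders for whatever the platform actually provides; this illustrates the pattern, not any particular vendor's implementation.

    #include <stdbool.h>
    #include <stdint.h>

    #define HEARTBEAT_TIMEOUT_MS (200U)  /* hypothetical staleness limit */

    /* Provided elsewhere by the (hypothetical) platform layer. */
    extern uint32_t planner_heartbeat_age_ms(void);
    extern bool     planner_command_is_valid(void);
    extern void     apply_planner_command(void);
    extern void     command_controlled_stop(void);

    /* Called at a fixed rate from the hard real-time control loop. */
    void supervisor_step(void)
    {
        const uint32_t age_ms = planner_heartbeat_age_ms();

        if ((age_ms > HEARTBEAT_TIMEOUT_MS) || (!planner_command_is_valid()))
        {
            /* High-level (ML) stack is late or incoherent: ignore it and
               bring the vehicle to a controlled stop using only low-level
               logic developed under the safety standard. */
            command_controlled_stop();
        }
        else
        {
            apply_planner_command();
        }
    }

The key design point is that the safe-stop path depends only on code developed under the safety process, never on the ML stack it supervises.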


N.B., this policy is mainly concerned with Highly Automated Vehicles (HAVs), which are defined as SAE Level 3 ("capable of monitoring the driving environment") and above.

edit: as to SAE Level 2, it has this (and more) to say:

> Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997).

also,

> Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.


I think this point from the post deserves emphasis, for consideration in future discussions:

> If Tesla makes a "major" change to software, pretty much all bets are off as to whether the new software will be better or worse in practice unless a safety critical assurance methodology (e.g., ISO 26262) has been used. (In fact, one can argue that any change invalidates previous test data, but that is a fine point beyond what I want to cover here.) Tesla says they're making a dramatic switch to radar as a primary sensor with this version. That sounds like it could be a major change. It would be no surprise if this software version resets the clock to zero miles of experience in terms of software field reliability both for better and for worse.

I am admittedly a broken record on this point, but Tesla moves very fast and very nimbly on a system that is under design controls. It would be very educational to learn about their development process and how it maps to ISO 26262.


Will ethics (e.g., the collection and storage of personal information, or professional responsibilities in regulated industries) be given any treatment?


Hey! Great question. We strongly believe that it's important to understand the ethical implications of such technology. However, we don't believe we are experts or authorities on the ethical implications and as such will provide recommended readings but not cover it ourselves.


Could you go into a little more detail here? Thanks!


Thinking of training data sets as an example: they contain a lot of personal information (faces, license plates, times, locations, speeds, etc.). Depends on jurisdiction, but there likely aren't legal issues in amassing and using this data when it is collected in plain view from public areas; I envision possible ethical issues though. Could you release training data for public use, without obscuring identifying information? There may not be legal issues in releasing raw data, but keeping in mind that the information is sensitive and recorded without consent, and that an engineer's first duty is to the public health, safety, and well-being, might there be a professional obligation to sanitize the data?

My example might be a bit contrived, but I think there are going to be many valid (and far better!) questions in this discipline that should be asked and considered, and I think your graduates need to be equipped to do so.


It isn't just ethics; in Europe, at least, it's regulatory. (See GDPR and its predecessors.)

The response from Udacity suggests not.


How would one go about meaningfully contributing to solving problems in genetics without having done the work leading to a MD or PhD (or both)?


The easiest way that probably anyone on HN (who can fizzbuzz) can help is with data management. So much stuff is still done by hand that could be easily scripted.

Researchers in our institute were amazed how easy it is to use e.g. Google Forms to gather data in a reasonable format. Once you get data in a reasonable format you can help them with transforming it, joining it with other sources, and cleaning it up. ETLs and data integration are often completely foreign concepts to them.

And that's just the researchers; you might still start calling them quite computer-competent after you talk to the people in the clinic. All the research is for nothing if it's not brought to "bedside" to benefit the patients in a clinical setting, outside all trials. For that you need to make sure genomics pipelines are automated and reproducible and only clinically relevant information gets to the oncologists (or other doctors) deciding on treatment. This is still not quite there even in the best places.

I think most of the really world-changing stuff will just be hard work on relatively easy problems. It's hard to get excited about these (compared to the latest neural networks or distributed high-performance systems), but they need to get done.


I'm not sure you would. I mean, I'm sure you could somehow, but at this point so much of what needs to be done is basic research, and it really can be done well in that context. There aren't many things that are ready to leave the context of a research lab and move into commercialization. We've got some notable disasters with Theranos, and even the YC-funded Taxa (glowing plant - that was a farce from the get-go, but they're doing some potentially interesting stuff now).

As far as education, it's not something you can learn by yourself, it just isn't. Most of the methods in a biological wet lab are very far from standardized and need a great deal of troubleshooting. Most post-docs in a new lab spend a couple months just trying to get basic stuff working that they've done dozens of times before. It's hard. You need people around you with experience and perspective, and doctorate programs are likely the only place you're going to get that kind of training.

I think there are a lot of people that want to approach biology with a CS mindset, especially the people interested in synthetic biology, but that rarely bears fruit. It could get to that place eventually, but there's a lot of ground to cover. In that sense I agree with Elon that, despite the huge impact genetic engineering could have, it's not the next thing because we're not ready yet. There's still too much that's fundamental to biological problems that we simply don't understand, and solving things in one species usually doesn't translate very far across taxa.


>leave the context of a research lab and into commercialization. We've got some notable disasters with Theranos

Did that spring from a research lab, or from a happy hour with MBA types wanting to jump on "start-up" fortunes?


> Most of the methods in a biological wet lab are very far from standardized and need a great deal of troubleshooting. Most post-docs in a new lab spend a couple months just trying to get basic stuff working that they've done dozens of times before.

Having had experience with syn bio in grad school and trying to reconcile the empirical (biology) and first principles (CS/math) approaches, I've been thinking a lot lately about how to streamline the troubleshooting process for picking up and optimizing wet lab methods. I'd love to chat - my email's in my profile.


This is why Theranos was such an effective scam: the current culture of "innovation" is so heavily based in software, an unconstrained space where a creative wunderkind can make great advances, that it thinks all problems can be solved through sheer thinking outside the box, "disruption," and dreaming big. Those are all good things to try, but I don't think it's a coincidence that Silicon Valley-based big-dreaming startups aren't doing nearly as well as big, boring research labs with a heavy understanding of the science and measured goals.


You probably couldn't. But if you refer to two things Musk said - a) genetics is important, and b) a PhD is not the best way to be useful - I think he didn't mean them to be taken together. He spent some time talking about how being useful means "area under the curve" - do a big thing for a small number of people, or do a small thing for a large number of people. Most people can aim for either of the two, and in both cases a PhD is probably not the most efficient use of your time.


Genetic research uses computational techniques today. However, most academics who understand genetics well are crappy programmers. My source for this is a friend who is a tenure-track professor of evolutionary biology at a major university, with publications based on computational analyses of genomes. In pulling those publications together, he inevitably had to spend a lot of time reviewing and cleaning up the terrible code of his co-authors, checking for correctness. "And I'm not even good at coding," he said. "That's how bad this stuff was!"

So, I think there must be a role for strong developers to partner with strong genetic researchers to make the best use of computers for research. That role might not exist now--you might have the opportunity to go create it. But it does seem sorely needed.


Genetics startups still need engineers and product people.

ex. Counsyl (https://www.counsyl.com)


I'm not sure if you would consider this meaningful, but if you have software development skills, you could either develop applications that can be used to solve problems in genetics, or applications that save the time of those working on solving those problems.


So, I can't answer about solving big problems, but I did genetic engineering research in grad school on bacteria. One could very easily conduct serious genetic engineering in one's bedroom for less than $500 or so. Of course this is fairly basic stuff, but still, you'd be amazed what is possible with very little equipment.

For example, there is a yearly competition called iGEM, which is a synthetic biology competition for undergrads. Some of the stuff they do with limited resources is quite impressive.

http://igem.org/Main_Page


You can definitely contribute. If you're a software engineer, then you could join a research lab as a scientific programmer. Good labs are well-funded in these areas and will have funding to cover salary for a programmer to implement data analysis pipelines, polish research software and make it publicly available etc.

Alternatively, if you're a software engineer or a product designer (or in many other roles), then you could join a company working on commercializing genetic medicine. There are lots, and those companies are definitely not just looking for people with PhDs.

Once in a place like that, you'd be able to chat further with people about your career direction.


Moreover, Musk said he didn't anticipate being involved in all 5 things he thought about in college, including genetics. What is he working on that's genetics-related? Did he just misspeak?


Consider computational biology. There are lots of problems which hinge on understanding the impact of genetics on populations, and on variation in genetics and its effects. As there are already sources of genetic data sets and infrastructure to generate those data sets, genetic research becomes more of a data science problem than a medical problem.


This is the part I somewhat disagree with. I've seen lots of strong computational biologists make the leap into generic data science, but I've seen way too many CS/data science types struggle. They take the data at face value, not recognizing the fact that biological data has flaws. A sound understanding of biology/chemistry helps a lot with identifying those flaws and generally with designing experiments/research.

Admittedly that's a bit of a generalization, and I am sure there are a decent number of exceptions, but it's consistent with my experience.


That is mostly because those "types", as you call them, didn't take math courses or slept through them, or don't use the math tools in everyday work - because it's not needed.

In other words, they're not Computer Scientists. They are Computer Programmers instead. (or maybe Computer System Engineers)


I believe that the comment you are responding to was speaking to the asymmetry that people with experience in computational biology have an easier time moving to general data science problems than do people with experience in general data science working on computational biology problems.

I agree that the asymmetry exists: there is a tremendous baseline of scientific knowledge and experience that is needed to make significant contributions to the field. I personally have worked with people with backgrounds in programming or CS on medical problems, and it has been frustrating because they lack what I would term "scientific common sense". I would personally prefer working with, and would be able to make more progress with, (for example) anyone who has completed a sequence of education sufficient for pre-med requirements and has some programming experience, over a "full stack data engineer". Even if someone with a programming or CS background were inclined to pick up the textbooks and amass the baseline scientific knowledge (I'm sure they exist, although I haven't met them yet), they'd still lack the years of laboratory work and experience of applying this knowledge.

My original comment was apparently poorly worded because it was interpreted by the responders differently than I intended, but delightfully, it resulted in very thoughtful comments. I am very skeptical that one can make even small contributions to genetics without the experience of years of specialized work. There are ancillary problems that could be done by someone with a programming or CS background, e.g., a better LIMS system, or perhaps protocol management, but I don't see those tasks as leading to later making meaningful contributions to the field of genetics. The MD or PhD isn't required, but all the work done leading up to it is, and so as I see it those prepared to make the contributions are most likely going to have gotten the degree on the way.


Indeed, the main problems in genetics are not related to handling data, but require major experimentation, even at the cellular level, not to mention higher ones.

There's not much CS can help with right now - the most useful tools (mass fuzzy searches and molecular simulations) are already there.


Demolition Man, I Robot, or Minority Report


It disturbs me that all the examples so far are science fiction. If people are inferring real-world capabilities from SF movies then we're totally screwed!


> If people are inferring real-world capabilities from SF movies then we're totally screwed!

https://en.wikipedia.org/wiki/CSI_effect


Other way round: real-world capabilities are built out to meet the SF expectations. Star Trek communicators -> mobile phones. Star Trek computer -> Alexa/Siri et al.

(Besides, the alternative is for the layman to infer capabilities from marketing material, which is even more outlandishly fictional)

Oh, and if you want an exhaustive list of when something appears in media, TVtropes will always deliver: http://tvtropes.org/pmwiki/discussion.php?id=6zj22p5kr47sfh2...


I don't see why my examples should in any way "disturb you". Science fiction frequently is an inspiration for products and ideas that are made real, examples are legion.

I judiciously selected I Robot and Minority Report because Audi and Lexus, respectively, had product placement for future design concepts.


If these movies are in fact the reason why people overestimate the capabilities of Tesla's system, then that means people's inability to distinguish science fiction from reality is getting people actually killed. That's pretty disturbing! Note, I'm not saying it's wrong, just awful.


I am not trying to imply that people are having difficulty distinguishing fiction from reality.

I am noting that real auto companies have deliberately placed product concepts in media to prime people's expectations of what future products will look like and what they will be able to do. Independently, there are also proper level 4 systems under active development getting plenty of popular press coverage.

I also note that the very public face of Tesla frequently makes very public and (in my opinion, overly optimistic) declarations of their product's capabilities both present and future, for example (Jan 2016), "The Model S is 'probably better than humans at this point in highway driving' according to Musk." [0]

It doesn't strike me to be all that far of a leap for an average person to conclude that the future has arrived.

[0] https://www.theguardian.com/technology/2016/jan/11/elon-musk...


automatic washing machine

automatic login / logout / lock

automatic debit of monthly bills

automatic reply to calls / emails

automatic plant watering


> automatic washing machine

Will happily flood the house if something goes wrong.

> automatic login

Frequently requires manually reentering credentials.

> automatic debit of monthly bills

Will happily send your life savings to the power company if they typo a couple of extra zeroes on your bill.

> automatic reply to emails

Notorious for massive screwups when people enable them without adequate controls, especially when multiple people on the same mailing list do it at the same time.

All of these examples seem to reinforce the idea that "automatic" does not mean "requires no human supervision."


The day Tesla's helper requires as much supervision as my washing machine or bank or email client, maybe they can claim naivete. Today, it's a cavalier and underhanded marketing tactic.

You're exactly the kind of fanboy I accused you of being, btw. Proper rules of engagement on HN be damned: you're wasting everyone's time with goalpost-moving and dis-ingenuousness.


That's a little harsh, both in general and specifically here. I've observed 'mikeash to be relatively level-headed in his comments here, even against the general baseline of HN's usually quite high level of quality; while he does make occasional excursions into unreasonability, so do we all, and I think it's worth making allowances for such things in the cause of a process which, in my experience, generally results in a stronger understanding of an issue all around.

If you feel your time is being wasted, perhaps you may wish to consider making the choice to spend it otherwise. Were it I who felt that way, I might still hesitate to generalize from myself to everyone, no matter how justified I might imagine myself to be in so doing.


No, I quite like discussion forums. Learned a lot from them. I just like when the discussions are charitable and productive, and when people argue in good spirit.

It's easier to pick up on the hostility in a sharp, barbed comment than in a smarmy, disingenuous one that only insults by implication, but it's still there.

Asking where people get such impressions, then mocking the first few responses citing sci-fi with a "that's a bigger problem!", then getting some current day ones, and proceeding to very weakly deflect them with excuses that don't stand up to scrutiny, doesn't seem like someone interested in understanding a situation.

It seems like someone trying to preach their opinion, while disguising it as a learning process, taking advantage of this forum's predilection for politeness.


What exactly is wrong with saying "that's a bigger problem" when sci-fi examples are posted? I didn't say they were wrong, I just recoiled in horror at the idea.

Am I not allowed to express my reactions to new ideas now? Or is "that's disturbing" somehow considered to be an indication of dismissal or disagreement?

I don't find these current day examples to be at all compelling. You think my response consists of excuses that don't stand up to scrutiny, I think they point out how automation has traditionally never been fully autonomous. The "they're getting it from sci-fi movies" is a lot more convincing, just horrifying.


> Or is "that's disturbing" somehow considered to be an indication of dismissal or disagreement?

Actually, you said "It disturbs me that all the examples so far are science fiction." which (a bit out of context and ignoring the principle of charity) could be interpreted as a cursory dismissal along the lines of, "these examples are too ridiculous to consider further".

Even with the principle of charity, I find "they're getting it from sci-fi movies" to be an unfair summary of my point, but perhaps I'm doing a poor job making my case clearly.

Thinking on it, would it be fair to expect any example of an autopilot function on a car to be from a type other than science fiction or fantasy?


I see what you mean, sorry to have been unclear. I think you see what I intended now.

You probably can't find it on a car without getting into SF, but an excessively sophisticated autopilot on an airplane could probably fit into an otherwise non-SF movie. Like the example of Airplane! but played seriously. I just assumed that's the sort of thing people were talking about with the idea coming from entertainment, since "obviously" technology in an SF movie is going to be unrealistic.

There certainly are some similar misconceptions generated by movies not usually considered to fall into the category of SF. For example, I'd wager a lot of people think spy satellites really can read a license plate from orbit, or that silenced guns just make a soft "pew" sound.


> Will happily flood the house if something goes wrong.

It isn't supposed to flood the house. It is designed to be automatic. Meaning no supervision. Tesla's Autopilot isn't designed this way, yet it carries the name.

Let's keep going.

> Will happily send your life savings to the power company if they typo a couple of extra zeroes on your bill.

This has nothing to do with automatic payment being poorly designed. It is designed to pay the balance without your intervention...hence automatic.

> Notorious for massive screwups when people enable them without adequate controls, especially when multiple people on the same mailing list do it at the same time.

Depends on how it is implemented. There are usually more knobs to this feature and it is designed to have you program it. Based on programming, settings, etc, it will do whatever you set it to without you having to be there...hence automatic reply.


I'm not saying automatic bill pay is poorly designed, I'm saying it requires supervision to make sure it doesn't do anything stupid. We don't expect "autopay" to be foolproof, we expect it to handle the mundane stuff.


It is designed to be foolproof. It alerts you when banking details fail. It alerts you when it makes a payment. The system also emails you the bill a few weeks before it even makes a payment, in case you want to dispute charges. The system, however, exists to pay bills, and when configured correctly, it does so without you having to intervene at all. Not sure why automatic bill pay requires humans to get involved.

Your bill and the automatic bill pay system are two different concerns here.


If it was designed to be foolproof, they wouldn't bother sending you emails before they make payments.

It works fine when the situation is good, and it can fail when it encounters troublesome situations in the real world. Since it operates in the real world, it needs some monitoring.


The attrition rate for a PhD program isn't 90%, but 50% is not unknown; my own was around 30% (measured from matriculation to defense, for my class year). Some students were forced out of the program, others left of their own accord.

Also, it is unusual for a dissertation to be outright rejected because of how it reflects on the advisor and committee: the committee is (supposed to be) kept up to date on the student's progress and will recommend against defending if the student is unlikely to pass. Slightly less unusual would be a student being allowed to defend, but then needing to do major revisions to their dissertation for it to be accepted. Keep in mind that by the point one is defending, quite a bit of time and money has been invested in the candidate, so there is a good incentive to see the candidate succeed, if for no other reason. Unsuited students are (ideally) dismissed much earlier, i.e., at admission to candidacy.

One absolutely worries about being scooped on papers, since those are the currency of academia and being scooped usually results in needing to publish your own (now less novel) work in a lesser journal. And as another commenter points out: a professor taking on 10 students with only 1 succeeding, if one defines success as being tenured, isn't that far off from reality.

As an aside, I personally think forming a research group at a university isn't all that different from creating a startup.


I've known students who defended their thesis and were told to do major revisions. Typically, it's because their thesis supervisor didn't do their job properly, as they should have known not to send that student to defend.

You're right in that they weed Ph.D. students out earlier, during their comprehensive exam. How it's done varies from department to department and university to university. My comprehensive was a lengthy oral exam by my committee with two rounds of questions. The first on background and the second on the written thesis proposal I submitted. I went for 3.5 hours straight, basically until the committee wanted lunch.

Equating a research group to a startup isn't a bad analogy. One of the professors in my department basically uses his students to do research for his company. He even makes them sign over the IP rights to him. Other professors have a continuing line of research across a number of students. Even my Ph.D. thesis was the latest in a number of theses on the same topic, each getting progressively more advanced. My thesis basically finished that line, with other related ones opening up as a result.


Nope, research groups/projects at universities are all about milking money from grants. Running a startup is all about making money for investors. The direction is different and the risk much lower.


This is an absolutely incorrect assertion: drug discovery is but one part of the drug development process. You're ignoring (among other things) the work in medicinal chemistry, retrosynthesis, and scale up/chemical engineering that go into creating a drug. The compound at the end of the development process rarely looks like the one identified in initial screening, and the work done to manufacture it and bring it to market unquestionably count as inventive.


> The $999 price point is designed to be affordable, and is possible because of the components Comma uses in its product, which tend to be inexpensive off-the-shelf electronics.

"Inexpensive off-the-shelf electronics" aren't rated for automotive environmental conditions, nor do they have the immunity to interference (e.g., single event upsets) required for safety-critical systems.


Off-the-shelf in the context of electronics means that they didn't custom-design some parts of it (for example, using a SoC module rather than a custom DDR layout), not that the parts are consumer-rated.


It is more than just parts, though. To get past the more demanding EM compatibility/immunity certs, you have to design the whole product with that in mind, at which point it stops being off-the-shelf or inexpensive.


Which SoC is it? Jetson?


Snapdragon 820


Nice. Same as these guys? http://www.nauto.com/

So you could basically deploy this as software only, on recent 820-based smartphones? You'd just need a fish-eye lens attachment and a CAN-bus-to-USB cable (or USB OTG GPIO).


Just wait until Comma.ai causes an accident and the NHTSA finds they can't do an analysis of the system because that system is an end-to-end neural network.


I doubt I would, even though I use Python for other purposes.

Lua isn't the only one in this space: I personally use Tcl (or Jim Tcl) for this sort of work, and it (IMHO) excels in this area. What advantages would you see your Python implementation having over Tcl or Lua which were designed with this area in mind?


> What advantages would you see your Python implementation having over Tcl or Lua which were designed with this area in mind?

Python is a much more popular language than Tcl or Lua. Embedding Python opens your application to scripting by many more people, because the average person wanting to write a script is more likely to be familiar with it. Even those familiar with multiple languages may prefer to use Python - Tcl and Lua are often considered to be slightly "quirky" languages.

Also, although an embedded Python interpreter is never going to run C libraries like NumPy, there are many pure-Python libraries out there which would potentially be available for scripts to use.
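
For what it's worth, the mechanics of embedding CPython are pretty small. Here's a minimal sketch along the lines of the "very high level embedding" example in the CPython docs, built with the flags from python3-config; error handling is trimmed, and the script text is just a stand-in for whatever a user would supply.

    #define PY_SSIZE_T_CLEAN
    #include <Python.h>

    int main(void)
    {
        Py_Initialize();  /* start the embedded interpreter */

        /* Run a user-supplied script. A real host would also register a
           custom extension module before this, so scripts can call back
           into the application. */
        int rc = PyRun_SimpleString(
            "total = sum(range(10))\n"
            "print('scripted from the host app:', total)\n");

        if (Py_FinalizeEx() < 0) {  /* shut the interpreter down */
            return 120;
        }
        return (rc == 0) ? 0 : 1;
    }

The fiddlier part is exposing the host application's objects to those scripts, which is the area where Tcl and Lua, having been designed for embedding, tend to keep things simpler.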


> Embedding Python opens your application to scripting by many more people

I don't think this is realistic; anyone who wants to script an application will use whatever is exposed by the developers, no matter how bad it is (e.g., Visual Basic for Applications, or Redis with Lua).

